Test Report: KVM_Linux_crio 18932

ef88892450886ee42051bb5f4cefdb4041e06670:2024-05-20:34547

Failed tests (13/207)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-972916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-972916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.958191333s)

-- stdout --
	* [addons-972916] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-972916" primary control-plane node in "addons-972916" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	  - Using image docker.io/busybox:stable
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-972916 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying csi-hostpath-driver addon...
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-972916 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, inspektor-gadget, helm-tiller, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
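(Aside on the gcp-auth note in the stdout above: the addon says a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label key. A minimal, hypothetical sketch of launching such a pod, reusing the busybox:stable image the addons already pull, could look like the following; the pod name and label value are illustrative assumptions, only the label key comes from the output.)

	# hypothetical pod that the gcp-auth webhook should leave unmodified
	kubectl run gcp-auth-skip-demo \
	  --image=busybox:stable \
	  --labels="gcp-auth-skip-secret=true" \
	  --restart=Never \
	  -- sleep 3600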
** stderr ** 
	I0520 11:52:20.060897  860889 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:52:20.061156  860889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:52:20.061165  860889 out.go:304] Setting ErrFile to fd 2...
	I0520 11:52:20.061172  860889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:52:20.061359  860889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 11:52:20.061942  860889 out.go:298] Setting JSON to false
	I0520 11:52:20.062903  860889 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5688,"bootTime":1716200252,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:52:20.062964  860889 start.go:139] virtualization: kvm guest
	I0520 11:52:20.065056  860889 out.go:177] * [addons-972916] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:52:20.066241  860889 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 11:52:20.067364  860889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:52:20.066240  860889 notify.go:220] Checking for updates...
	I0520 11:52:20.069547  860889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 11:52:20.070645  860889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 11:52:20.071817  860889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:52:20.072917  860889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:52:20.074130  860889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:52:20.104719  860889 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:52:20.105802  860889 start.go:297] selected driver: kvm2
	I0520 11:52:20.105811  860889 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:52:20.105822  860889 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:52:20.106513  860889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:52:20.106584  860889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:52:20.121060  860889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:52:20.121102  860889 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:52:20.121318  860889 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:20.121342  860889 cni.go:84] Creating CNI manager for ""
	I0520 11:52:20.121357  860889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:20.121371  860889 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:52:20.121423  860889 start.go:340] cluster config:
	{Name:addons-972916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-972916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:52:20.121508  860889 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:52:20.122981  860889 out.go:177] * Starting "addons-972916" primary control-plane node in "addons-972916" cluster
	I0520 11:52:20.123979  860889 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:52:20.124007  860889 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:52:20.124017  860889 cache.go:56] Caching tarball of preloaded images
	I0520 11:52:20.124100  860889 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:52:20.124110  860889 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:52:20.124387  860889 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/config.json ...
	I0520 11:52:20.124405  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/config.json: {Name:mk3186fdbd3d566bf93cec479ae9d5693284ec79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:20.124521  860889 start.go:360] acquireMachinesLock for addons-972916: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:52:20.124563  860889 start.go:364] duration metric: took 29.847µs to acquireMachinesLock for "addons-972916"
	I0520 11:52:20.124580  860889 start.go:93] Provisioning new machine with config: &{Name:addons-972916 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-972916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:20.124645  860889 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 11:52:20.126049  860889 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 11:52:20.126162  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:20.126206  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:20.139544  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44255
	I0520 11:52:20.140000  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:20.140523  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:52:20.140544  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:20.140880  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:20.141056  860889 main.go:141] libmachine: (addons-972916) Calling .GetMachineName
	I0520 11:52:20.141186  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:20.141347  860889 start.go:159] libmachine.API.Create for "addons-972916" (driver="kvm2")
	I0520 11:52:20.141387  860889 client.go:168] LocalClient.Create starting
	I0520 11:52:20.141420  860889 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 11:52:20.229810  860889 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 11:52:20.274207  860889 main.go:141] libmachine: Running pre-create checks...
	I0520 11:52:20.274229  860889 main.go:141] libmachine: (addons-972916) Calling .PreCreateCheck
	I0520 11:52:20.274733  860889 main.go:141] libmachine: (addons-972916) Calling .GetConfigRaw
	I0520 11:52:20.275196  860889 main.go:141] libmachine: Creating machine...
	I0520 11:52:20.275213  860889 main.go:141] libmachine: (addons-972916) Calling .Create
	I0520 11:52:20.275355  860889 main.go:141] libmachine: (addons-972916) Creating KVM machine...
	I0520 11:52:20.276645  860889 main.go:141] libmachine: (addons-972916) DBG | found existing default KVM network
	I0520 11:52:20.277493  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:20.277335  860911 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0520 11:52:20.277536  860889 main.go:141] libmachine: (addons-972916) DBG | created network xml: 
	I0520 11:52:20.277556  860889 main.go:141] libmachine: (addons-972916) DBG | <network>
	I0520 11:52:20.277565  860889 main.go:141] libmachine: (addons-972916) DBG |   <name>mk-addons-972916</name>
	I0520 11:52:20.277574  860889 main.go:141] libmachine: (addons-972916) DBG |   <dns enable='no'/>
	I0520 11:52:20.277615  860889 main.go:141] libmachine: (addons-972916) DBG |   
	I0520 11:52:20.277637  860889 main.go:141] libmachine: (addons-972916) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 11:52:20.277649  860889 main.go:141] libmachine: (addons-972916) DBG |     <dhcp>
	I0520 11:52:20.277658  860889 main.go:141] libmachine: (addons-972916) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 11:52:20.277672  860889 main.go:141] libmachine: (addons-972916) DBG |     </dhcp>
	I0520 11:52:20.277684  860889 main.go:141] libmachine: (addons-972916) DBG |   </ip>
	I0520 11:52:20.277692  860889 main.go:141] libmachine: (addons-972916) DBG |   
	I0520 11:52:20.277704  860889 main.go:141] libmachine: (addons-972916) DBG | </network>
	I0520 11:52:20.277717  860889 main.go:141] libmachine: (addons-972916) DBG | 
	I0520 11:52:20.283015  860889 main.go:141] libmachine: (addons-972916) DBG | trying to create private KVM network mk-addons-972916 192.168.39.0/24...
	I0520 11:52:20.345570  860889 main.go:141] libmachine: (addons-972916) DBG | private KVM network mk-addons-972916 192.168.39.0/24 created
	I0520 11:52:20.345604  860889 main.go:141] libmachine: (addons-972916) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916 ...
	I0520 11:52:20.345624  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:20.345532  860911 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 11:52:20.345650  860889 main.go:141] libmachine: (addons-972916) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:52:20.345681  860889 main.go:141] libmachine: (addons-972916) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 11:52:20.594157  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:20.594007  860911 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa...
	I0520 11:52:20.812330  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:20.812152  860911 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/addons-972916.rawdisk...
	I0520 11:52:20.812367  860889 main.go:141] libmachine: (addons-972916) DBG | Writing magic tar header
	I0520 11:52:20.812387  860889 main.go:141] libmachine: (addons-972916) DBG | Writing SSH key tar header
	I0520 11:52:20.812405  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:20.812313  860911 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916 ...
	I0520 11:52:20.812493  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916
	I0520 11:52:20.812522  860889 main.go:141] libmachine: (addons-972916) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916 (perms=drwx------)
	I0520 11:52:20.812534  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 11:52:20.812547  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 11:52:20.812566  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 11:52:20.812574  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 11:52:20.812580  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home/jenkins
	I0520 11:52:20.812591  860889 main.go:141] libmachine: (addons-972916) DBG | Checking permissions on dir: /home
	I0520 11:52:20.812604  860889 main.go:141] libmachine: (addons-972916) DBG | Skipping /home - not owner
	I0520 11:52:20.812618  860889 main.go:141] libmachine: (addons-972916) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 11:52:20.812635  860889 main.go:141] libmachine: (addons-972916) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 11:52:20.812641  860889 main.go:141] libmachine: (addons-972916) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 11:52:20.812651  860889 main.go:141] libmachine: (addons-972916) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 11:52:20.812664  860889 main.go:141] libmachine: (addons-972916) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 11:52:20.812746  860889 main.go:141] libmachine: (addons-972916) Creating domain...
	I0520 11:52:20.813598  860889 main.go:141] libmachine: (addons-972916) define libvirt domain using xml: 
	I0520 11:52:20.813608  860889 main.go:141] libmachine: (addons-972916) <domain type='kvm'>
	I0520 11:52:20.813614  860889 main.go:141] libmachine: (addons-972916)   <name>addons-972916</name>
	I0520 11:52:20.813618  860889 main.go:141] libmachine: (addons-972916)   <memory unit='MiB'>4000</memory>
	I0520 11:52:20.813623  860889 main.go:141] libmachine: (addons-972916)   <vcpu>2</vcpu>
	I0520 11:52:20.813631  860889 main.go:141] libmachine: (addons-972916)   <features>
	I0520 11:52:20.813636  860889 main.go:141] libmachine: (addons-972916)     <acpi/>
	I0520 11:52:20.813640  860889 main.go:141] libmachine: (addons-972916)     <apic/>
	I0520 11:52:20.813645  860889 main.go:141] libmachine: (addons-972916)     <pae/>
	I0520 11:52:20.813652  860889 main.go:141] libmachine: (addons-972916)     
	I0520 11:52:20.813657  860889 main.go:141] libmachine: (addons-972916)   </features>
	I0520 11:52:20.813662  860889 main.go:141] libmachine: (addons-972916)   <cpu mode='host-passthrough'>
	I0520 11:52:20.813667  860889 main.go:141] libmachine: (addons-972916)   
	I0520 11:52:20.813679  860889 main.go:141] libmachine: (addons-972916)   </cpu>
	I0520 11:52:20.813687  860889 main.go:141] libmachine: (addons-972916)   <os>
	I0520 11:52:20.813691  860889 main.go:141] libmachine: (addons-972916)     <type>hvm</type>
	I0520 11:52:20.813699  860889 main.go:141] libmachine: (addons-972916)     <boot dev='cdrom'/>
	I0520 11:52:20.813703  860889 main.go:141] libmachine: (addons-972916)     <boot dev='hd'/>
	I0520 11:52:20.813738  860889 main.go:141] libmachine: (addons-972916)     <bootmenu enable='no'/>
	I0520 11:52:20.813762  860889 main.go:141] libmachine: (addons-972916)   </os>
	I0520 11:52:20.813772  860889 main.go:141] libmachine: (addons-972916)   <devices>
	I0520 11:52:20.813783  860889 main.go:141] libmachine: (addons-972916)     <disk type='file' device='cdrom'>
	I0520 11:52:20.813815  860889 main.go:141] libmachine: (addons-972916)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/boot2docker.iso'/>
	I0520 11:52:20.813829  860889 main.go:141] libmachine: (addons-972916)       <target dev='hdc' bus='scsi'/>
	I0520 11:52:20.813852  860889 main.go:141] libmachine: (addons-972916)       <readonly/>
	I0520 11:52:20.813871  860889 main.go:141] libmachine: (addons-972916)     </disk>
	I0520 11:52:20.813877  860889 main.go:141] libmachine: (addons-972916)     <disk type='file' device='disk'>
	I0520 11:52:20.813885  860889 main.go:141] libmachine: (addons-972916)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 11:52:20.813894  860889 main.go:141] libmachine: (addons-972916)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/addons-972916.rawdisk'/>
	I0520 11:52:20.813910  860889 main.go:141] libmachine: (addons-972916)       <target dev='hda' bus='virtio'/>
	I0520 11:52:20.813918  860889 main.go:141] libmachine: (addons-972916)     </disk>
	I0520 11:52:20.813923  860889 main.go:141] libmachine: (addons-972916)     <interface type='network'>
	I0520 11:52:20.813931  860889 main.go:141] libmachine: (addons-972916)       <source network='mk-addons-972916'/>
	I0520 11:52:20.813936  860889 main.go:141] libmachine: (addons-972916)       <model type='virtio'/>
	I0520 11:52:20.813960  860889 main.go:141] libmachine: (addons-972916)     </interface>
	I0520 11:52:20.813978  860889 main.go:141] libmachine: (addons-972916)     <interface type='network'>
	I0520 11:52:20.813993  860889 main.go:141] libmachine: (addons-972916)       <source network='default'/>
	I0520 11:52:20.814005  860889 main.go:141] libmachine: (addons-972916)       <model type='virtio'/>
	I0520 11:52:20.814018  860889 main.go:141] libmachine: (addons-972916)     </interface>
	I0520 11:52:20.814036  860889 main.go:141] libmachine: (addons-972916)     <serial type='pty'>
	I0520 11:52:20.814049  860889 main.go:141] libmachine: (addons-972916)       <target port='0'/>
	I0520 11:52:20.814060  860889 main.go:141] libmachine: (addons-972916)     </serial>
	I0520 11:52:20.814070  860889 main.go:141] libmachine: (addons-972916)     <console type='pty'>
	I0520 11:52:20.814082  860889 main.go:141] libmachine: (addons-972916)       <target type='serial' port='0'/>
	I0520 11:52:20.814118  860889 main.go:141] libmachine: (addons-972916)     </console>
	I0520 11:52:20.814144  860889 main.go:141] libmachine: (addons-972916)     <rng model='virtio'>
	I0520 11:52:20.814170  860889 main.go:141] libmachine: (addons-972916)       <backend model='random'>/dev/random</backend>
	I0520 11:52:20.814180  860889 main.go:141] libmachine: (addons-972916)     </rng>
	I0520 11:52:20.814190  860889 main.go:141] libmachine: (addons-972916)     
	I0520 11:52:20.814198  860889 main.go:141] libmachine: (addons-972916)     
	I0520 11:52:20.814209  860889 main.go:141] libmachine: (addons-972916)   </devices>
	I0520 11:52:20.814219  860889 main.go:141] libmachine: (addons-972916) </domain>
	I0520 11:52:20.814239  860889 main.go:141] libmachine: (addons-972916) 
	I0520 11:52:20.819722  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:40:2e:2f in network default
	I0520 11:52:20.820132  860889 main.go:141] libmachine: (addons-972916) Ensuring networks are active...
	I0520 11:52:20.820157  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:20.820740  860889 main.go:141] libmachine: (addons-972916) Ensuring network default is active
	I0520 11:52:20.821030  860889 main.go:141] libmachine: (addons-972916) Ensuring network mk-addons-972916 is active
	I0520 11:52:20.821454  860889 main.go:141] libmachine: (addons-972916) Getting domain xml...
	I0520 11:52:20.822044  860889 main.go:141] libmachine: (addons-972916) Creating domain...
	I0520 11:52:22.168795  860889 main.go:141] libmachine: (addons-972916) Waiting to get IP...
	I0520 11:52:22.169645  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:22.170052  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:22.170080  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:22.170025  860911 retry.go:31] will retry after 294.121091ms: waiting for machine to come up
	I0520 11:52:22.465517  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:22.465973  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:22.465995  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:22.465950  860911 retry.go:31] will retry after 360.400143ms: waiting for machine to come up
	I0520 11:52:22.827464  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:22.827919  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:22.827944  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:22.827888  860911 retry.go:31] will retry after 356.272157ms: waiting for machine to come up
	I0520 11:52:23.185381  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:23.185762  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:23.185787  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:23.185734  860911 retry.go:31] will retry after 397.296609ms: waiting for machine to come up
	I0520 11:52:23.585275  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:23.585709  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:23.585738  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:23.585670  860911 retry.go:31] will retry after 511.110516ms: waiting for machine to come up
	I0520 11:52:24.098297  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:24.098704  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:24.098733  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:24.098644  860911 retry.go:31] will retry after 858.907136ms: waiting for machine to come up
	I0520 11:52:24.958616  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:24.959055  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:24.959085  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:24.958997  860911 retry.go:31] will retry after 745.571208ms: waiting for machine to come up
	I0520 11:52:25.706525  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:25.706842  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:25.706895  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:25.706793  860911 retry.go:31] will retry after 1.030066937s: waiting for machine to come up
	I0520 11:52:26.738952  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:26.739277  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:26.739302  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:26.739228  860911 retry.go:31] will retry after 1.159564436s: waiting for machine to come up
	I0520 11:52:27.900010  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:27.900373  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:27.900396  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:27.900324  860911 retry.go:31] will retry after 1.458275911s: waiting for machine to come up
	I0520 11:52:29.361139  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:29.361638  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:29.361665  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:29.361604  860911 retry.go:31] will retry after 2.73973386s: waiting for machine to come up
	I0520 11:52:32.104335  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:32.104787  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:32.104817  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:32.104734  860911 retry.go:31] will retry after 3.325539916s: waiting for machine to come up
	I0520 11:52:35.431806  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:35.432185  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:35.432211  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:35.432124  860911 retry.go:31] will retry after 3.222316658s: waiting for machine to come up
	I0520 11:52:38.658372  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:38.658673  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find current IP address of domain addons-972916 in network mk-addons-972916
	I0520 11:52:38.658698  860889 main.go:141] libmachine: (addons-972916) DBG | I0520 11:52:38.658625  860911 retry.go:31] will retry after 4.577434116s: waiting for machine to come up
	I0520 11:52:43.240264  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.240700  860889 main.go:141] libmachine: (addons-972916) Found IP for machine: 192.168.39.206
	I0520 11:52:43.240722  860889 main.go:141] libmachine: (addons-972916) Reserving static IP address...
	I0520 11:52:43.240733  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has current primary IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.241126  860889 main.go:141] libmachine: (addons-972916) DBG | unable to find host DHCP lease matching {name: "addons-972916", mac: "52:54:00:df:4b:82", ip: "192.168.39.206"} in network mk-addons-972916
	I0520 11:52:43.385612  860889 main.go:141] libmachine: (addons-972916) DBG | Getting to WaitForSSH function...
	I0520 11:52:43.385642  860889 main.go:141] libmachine: (addons-972916) Reserved static IP address: 192.168.39.206
	I0520 11:52:43.385663  860889 main.go:141] libmachine: (addons-972916) Waiting for SSH to be available...
	I0520 11:52:43.388624  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.389131  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.389168  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.389298  860889 main.go:141] libmachine: (addons-972916) DBG | Using SSH client type: external
	I0520 11:52:43.389343  860889 main.go:141] libmachine: (addons-972916) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa (-rw-------)
	I0520 11:52:43.389384  860889 main.go:141] libmachine: (addons-972916) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:52:43.389402  860889 main.go:141] libmachine: (addons-972916) DBG | About to run SSH command:
	I0520 11:52:43.389423  860889 main.go:141] libmachine: (addons-972916) DBG | exit 0
	I0520 11:52:43.510571  860889 main.go:141] libmachine: (addons-972916) DBG | SSH cmd err, output: <nil>: 
	I0520 11:52:43.510776  860889 main.go:141] libmachine: (addons-972916) KVM machine creation complete!
	I0520 11:52:43.511133  860889 main.go:141] libmachine: (addons-972916) Calling .GetConfigRaw
	I0520 11:52:43.520093  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:43.520370  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:43.520558  860889 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:52:43.520576  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:52:43.521770  860889 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:52:43.521787  860889 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:52:43.521795  860889 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:52:43.521803  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:43.523953  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.524332  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.524363  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.524436  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:43.524615  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.524795  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.524919  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:43.525082  860889 main.go:141] libmachine: Using SSH client type: native
	I0520 11:52:43.525274  860889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0520 11:52:43.525288  860889 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:52:43.629916  860889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:52:43.629970  860889 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:52:43.629981  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:43.632673  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.633119  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.633143  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.633292  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:43.633523  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.633688  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.633911  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:43.634107  860889 main.go:141] libmachine: Using SSH client type: native
	I0520 11:52:43.634317  860889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0520 11:52:43.634330  860889 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:52:43.739622  860889 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:52:43.739711  860889 main.go:141] libmachine: found compatible host: buildroot
	I0520 11:52:43.739723  860889 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:52:43.739732  860889 main.go:141] libmachine: (addons-972916) Calling .GetMachineName
	I0520 11:52:43.740020  860889 buildroot.go:166] provisioning hostname "addons-972916"
	I0520 11:52:43.740061  860889 main.go:141] libmachine: (addons-972916) Calling .GetMachineName
	I0520 11:52:43.740273  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:43.742796  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.743222  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.743248  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.743417  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:43.743629  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.743780  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.743978  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:43.744159  860889 main.go:141] libmachine: Using SSH client type: native
	I0520 11:52:43.744367  860889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0520 11:52:43.744381  860889 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-972916 && echo "addons-972916" | sudo tee /etc/hostname
	I0520 11:52:43.861030  860889 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-972916
	
	I0520 11:52:43.861077  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:43.864017  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.864408  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.864439  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.864586  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:43.864819  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.865005  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:43.865168  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:43.865298  860889 main.go:141] libmachine: Using SSH client type: native
	I0520 11:52:43.865513  860889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0520 11:52:43.865531  860889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-972916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-972916/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-972916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:52:43.983667  860889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:52:43.983699  860889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 11:52:43.983727  860889 buildroot.go:174] setting up certificates
	I0520 11:52:43.983743  860889 provision.go:84] configureAuth start
	I0520 11:52:43.983758  860889 main.go:141] libmachine: (addons-972916) Calling .GetMachineName
	I0520 11:52:43.984073  860889 main.go:141] libmachine: (addons-972916) Calling .GetIP
	I0520 11:52:43.986630  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.987035  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.987064  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.987171  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:43.989196  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.989428  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:43.989453  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:43.989568  860889 provision.go:143] copyHostCerts
	I0520 11:52:43.989644  860889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 11:52:43.989764  860889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 11:52:43.989835  860889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 11:52:43.989896  860889 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.addons-972916 san=[127.0.0.1 192.168.39.206 addons-972916 localhost minikube]
	I0520 11:52:44.057145  860889 provision.go:177] copyRemoteCerts
	I0520 11:52:44.057228  860889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:52:44.057266  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:44.059945  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.060233  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.060263  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.060403  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:44.060564  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.060747  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:44.060852  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:52:44.144733  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:52:44.168897  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 11:52:44.192928  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:52:44.217744  860889 provision.go:87] duration metric: took 233.98594ms to configureAuth
	I0520 11:52:44.217772  860889 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:52:44.217933  860889 config.go:182] Loaded profile config "addons-972916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:44.218042  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:44.220703  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.221034  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.221066  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.221222  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:44.221416  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.221609  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.221755  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:44.221956  860889 main.go:141] libmachine: Using SSH client type: native
	I0520 11:52:44.222139  860889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0520 11:52:44.222155  860889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:52:44.735464  860889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:52:44.735495  860889 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:52:44.735504  860889 main.go:141] libmachine: (addons-972916) Calling .GetURL
	I0520 11:52:44.736748  860889 main.go:141] libmachine: (addons-972916) DBG | Using libvirt version 6000000
	I0520 11:52:44.739484  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.740168  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.740197  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.740376  860889 main.go:141] libmachine: Docker is up and running!
	I0520 11:52:44.740398  860889 main.go:141] libmachine: Reticulating splines...
	I0520 11:52:44.740408  860889 client.go:171] duration metric: took 24.599008921s to LocalClient.Create
	I0520 11:52:44.740442  860889 start.go:167] duration metric: took 24.599096032s to libmachine.API.Create "addons-972916"
	I0520 11:52:44.740454  860889 start.go:293] postStartSetup for "addons-972916" (driver="kvm2")
	I0520 11:52:44.740468  860889 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:52:44.740494  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:44.740757  860889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:52:44.740788  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:44.742737  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.743050  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.743076  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.743219  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:44.743372  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.743523  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:44.743660  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:52:44.824980  860889 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:52:44.829177  860889 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:52:44.829200  860889 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 11:52:44.829259  860889 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 11:52:44.829281  860889 start.go:296] duration metric: took 88.818439ms for postStartSetup
	I0520 11:52:44.829320  860889 main.go:141] libmachine: (addons-972916) Calling .GetConfigRaw
	I0520 11:52:44.859765  860889 main.go:141] libmachine: (addons-972916) Calling .GetIP
	I0520 11:52:44.862188  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.862543  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.862568  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.862921  860889 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/config.json ...
	I0520 11:52:44.863078  860889 start.go:128] duration metric: took 24.738421802s to createHost
	I0520 11:52:44.863104  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:44.865510  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.865829  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.865854  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.865971  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:44.866171  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.866319  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.866437  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:44.866549  860889 main.go:141] libmachine: Using SSH client type: native
	I0520 11:52:44.866712  860889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0520 11:52:44.866733  860889 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:52:44.971190  860889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205964.948662219
	
	I0520 11:52:44.971216  860889 fix.go:216] guest clock: 1716205964.948662219
	I0520 11:52:44.971226  860889 fix.go:229] Guest: 2024-05-20 11:52:44.948662219 +0000 UTC Remote: 2024-05-20 11:52:44.863091073 +0000 UTC m=+24.834725756 (delta=85.571146ms)
	I0520 11:52:44.971258  860889 fix.go:200] guest clock delta is within tolerance: 85.571146ms
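
The guest clock check above runs "date +%s.%N" over SSH and compares it against the host clock, resyncing only when the delta exceeds the tolerance. A rough manual equivalent (hypothetical helper, not minikube code; the key path is taken from this run):

    key=/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i "$key" docker@192.168.39.206 'date +%s.%N')
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "guest-host delta: %.3fs\n", g - h }'
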
	I0520 11:52:44.971265  860889 start.go:83] releasing machines lock for "addons-972916", held for 24.846692048s
	I0520 11:52:44.971294  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:44.971552  860889 main.go:141] libmachine: (addons-972916) Calling .GetIP
	I0520 11:52:44.974279  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.974593  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.974624  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.974785  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:44.975275  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:44.975459  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:52:44.975551  860889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:52:44.975607  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:44.975709  860889 ssh_runner.go:195] Run: cat /version.json
	I0520 11:52:44.975737  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:52:44.978681  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.978911  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.979112  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.979156  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.979254  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:44.979364  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:44.979391  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:44.979444  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.979551  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:52:44.979636  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:44.979692  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:52:44.979889  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:52:44.979930  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:52:44.980038  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	W0520 11:52:45.076445  860889 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:52:45.076547  860889 ssh_runner.go:195] Run: systemctl --version
	I0520 11:52:45.082645  860889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:52:45.244358  860889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:52:45.251325  860889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:52:45.251391  860889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:52:45.266739  860889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:52:45.266764  860889 start.go:494] detecting cgroup driver to use...
	I0520 11:52:45.266835  860889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:52:45.281219  860889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:52:45.294117  860889 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:52:45.294159  860889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:52:45.306694  860889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:52:45.319965  860889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:52:45.435797  860889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:52:45.585048  860889 docker.go:233] disabling docker service ...
	I0520 11:52:45.585126  860889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:52:45.599492  860889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:52:45.612298  860889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:52:45.741098  860889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:52:45.882962  860889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:52:45.898976  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:52:45.919269  860889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:52:45.919327  860889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:52:45.931273  860889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:52:45.931344  860889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:52:45.943771  860889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:52:45.955866  860889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:52:45.968319  860889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:52:45.980688  860889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:52:45.992792  860889 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:52:46.011514  860889 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
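
All of the sed edits above target the same CRI-O drop-in. The resulting file is not captured in this log, but reconstructed from those commands, an equivalent /etc/crio/crio.conf.d/02-crio.conf could be written from scratch roughly as:

    sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
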
	I0520 11:52:46.024302  860889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:52:46.035306  860889 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:52:46.035358  860889 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:52:46.050480  860889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
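
The sysctl probe above fails with status 255 because net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the modprobe follows. A sketch that applies both kernel settings persistently (standard modules-load.d/sysctl.d paths, not something this run does):

    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system
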
	I0520 11:52:46.060104  860889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:46.186447  860889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:52:46.312756  860889 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:52:46.312857  860889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:52:46.317817  860889 start.go:562] Will wait 60s for crictl version
	I0520 11:52:46.317888  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:52:46.321586  860889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:52:46.365710  860889 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:52:46.365799  860889 ssh_runner.go:195] Run: crio --version
	I0520 11:52:46.392792  860889 ssh_runner.go:195] Run: crio --version
	I0520 11:52:46.421176  860889 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:52:46.422354  860889 main.go:141] libmachine: (addons-972916) Calling .GetIP
	I0520 11:52:46.424976  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:46.425334  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:52:46.425362  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:52:46.425559  860889 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:52:46.429487  860889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:52:46.441938  860889 kubeadm.go:877] updating cluster {Name:addons-972916 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-972916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:52:46.442068  860889 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:52:46.442111  860889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:52:46.473745  860889 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:52:46.473806  860889 ssh_runner.go:195] Run: which lz4
	I0520 11:52:46.477767  860889 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:52:46.481923  860889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:52:46.481956  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:52:47.763975  860889 crio.go:462] duration metric: took 1.286241488s to copy over tarball
	I0520 11:52:47.764054  860889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:52:49.958026  860889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.193931182s)
	I0520 11:52:49.958069  860889 crio.go:469] duration metric: took 2.194064545s to extract the tarball
	I0520 11:52:49.958079  860889 ssh_runner.go:146] rm: /preloaded.tar.lz4
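
The preload step copies a roughly 395 MB tarball of cached images into the guest and unpacks it over /var so CRI-O already has every control-plane image. Done by hand with the same tar flags (paths taken from this run; staging via /tmp is an assumption, since a non-root copy cannot write to /):

    key=/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa
    tarball=/home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
    scp -i "$key" "$tarball" docker@192.168.39.206:/tmp/preloaded.tar.lz4
    ssh -i "$key" docker@192.168.39.206 'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4 \
      && sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 \
      && sudo rm /preloaded.tar.lz4'
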
	I0520 11:52:49.995022  860889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:52:50.038976  860889 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:52:50.039003  860889 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:52:50.039012  860889 kubeadm.go:928] updating node { 192.168.39.206 8443 v1.30.1 crio true true} ...
	I0520 11:52:50.039144  860889 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-972916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-972916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:52:50.039210  860889 ssh_runner.go:195] Run: crio config
	I0520 11:52:50.095495  860889 cni.go:84] Creating CNI manager for ""
	I0520 11:52:50.095514  860889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:50.095531  860889 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:52:50.095554  860889 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-972916 NodeName:addons-972916 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:52:50.095726  860889 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-972916"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
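
The generated kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is written to /var/tmp/minikube/kubeadm.yaml before init runs. One way to sanity-check such a file on the guest without changing node state is kubeadm's dry-run mode, e.g.:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
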
	I0520 11:52:50.095797  860889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:52:50.106497  860889 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:52:50.106559  860889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:52:50.117023  860889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 11:52:50.135390  860889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:52:50.152171  860889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0520 11:52:50.168321  860889 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0520 11:52:50.172080  860889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:52:50.183921  860889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:50.323718  860889 ssh_runner.go:195] Run: sudo systemctl start kubelet
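
The three scp calls above stage the kubeadm drop-in, the kubelet unit and the kubeadm config, after which systemd is reloaded and the kubelet is started (it then waits for kubeadm to hand it a configuration). A condensed sketch of that sequence as run inside the guest, with the file contents elided:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # 10-kubeadm.conf (313 bytes), kubelet.service (352 bytes) and kubeadm.yaml.new (2157 bytes) land here
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
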
	I0520 11:52:50.340709  860889 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916 for IP: 192.168.39.206
	I0520 11:52:50.340733  860889 certs.go:194] generating shared ca certs ...
	I0520 11:52:50.340751  860889 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:50.340910  860889 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 11:52:50.567773  860889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt ...
	I0520 11:52:50.567805  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt: {Name:mk442975d521c93eb25a132ea7955bba8a837dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:50.567961  860889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key ...
	I0520 11:52:50.567972  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key: {Name:mk33cd6f9665bfcbee2812ac7b129ccd778052f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:50.568044  860889 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 11:52:50.751328  860889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt ...
	I0520 11:52:50.751359  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt: {Name:mk8afa96332688a20f548c0f061d06261ed96aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:50.751512  860889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key ...
	I0520 11:52:50.751526  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key: {Name:mkdb7fd86faa0ae1a05f7d61695d9dab6eea3e9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:50.751584  860889 certs.go:256] generating profile certs ...
	I0520 11:52:50.751646  860889 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/client.key
	I0520 11:52:50.751660  860889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/client.crt with IP's: []
	I0520 11:52:51.619301  860889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/client.crt ...
	I0520 11:52:51.619340  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/client.crt: {Name:mk63d9f0c978350bf2f8947796d6fc0f6d02a813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:51.619541  860889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/client.key ...
	I0520 11:52:51.619557  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/client.key: {Name:mkfd5bdb70b3364c130ae6d19be67ba2e637782a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:51.619655  860889 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.key.6e5baf07
	I0520 11:52:51.619676  860889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.crt.6e5baf07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0520 11:52:51.692712  860889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.crt.6e5baf07 ...
	I0520 11:52:51.692747  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.crt.6e5baf07: {Name:mkd18cb7de690b3892cee6b6cd1b2ea7dae0542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:51.692916  860889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.key.6e5baf07 ...
	I0520 11:52:51.692936  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.key.6e5baf07: {Name:mkf284f2685810c0f5f9aaa5d71c76e14105aee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:51.693059  860889 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.crt.6e5baf07 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.crt
	I0520 11:52:51.693143  860889 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.key.6e5baf07 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.key
	I0520 11:52:51.693194  860889 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.key
	I0520 11:52:51.693213  860889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.crt with IP's: []
	I0520 11:52:51.838049  860889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.crt ...
	I0520 11:52:51.838081  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.crt: {Name:mkfd38606c839734c908558c42a9697f78b51f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:51.838250  860889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.key ...
	I0520 11:52:51.838268  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.key: {Name:mk64c810f06b7bc6963699f45b0055e069c8e62a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:51.838473  860889 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:52:51.838511  860889 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:52:51.838539  860889 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:52:51.838564  860889 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 11:52:51.839270  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:52:51.868163  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 11:52:51.891877  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:52:51.914984  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 11:52:51.939021  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 11:52:51.961344  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:52:52.000081  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:52:52.022241  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/addons-972916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:52:52.044858  860889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:52:52.067277  860889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:52:52.083343  860889 ssh_runner.go:195] Run: openssl version
	I0520 11:52:52.089698  860889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:52:52.100278  860889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:52:52.104664  860889 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:52:52.104729  860889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:52:52.110414  860889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
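
The b5213941.0 symlink created above follows OpenSSL's subject-hash naming convention, which is how TLS clients locate the minikube CA under /etc/ssl/certs. Reproduced by hand:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
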
	I0520 11:52:52.121200  860889 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:52:52.125225  860889 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:52:52.125277  860889 kubeadm.go:391] StartCluster: {Name:addons-972916 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-972916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:52:52.125372  860889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:52:52.125440  860889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:52:52.164330  860889 cri.go:89] found id: ""
	I0520 11:52:52.164410  860889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:52:52.174527  860889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:52.184013  860889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:52.193541  860889 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:52.193564  860889 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:52.193598  860889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:52.202478  860889 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:52.202539  860889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:52.211921  860889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:52.221716  860889 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:52.221778  860889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:52.231216  860889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:52.240563  860889 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:52.240614  860889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:52.249852  860889 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:52.258671  860889 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:52.258718  860889 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:52.267795  860889 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:52.324767  860889 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:52.324825  860889 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:52.454754  860889 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:52.454958  860889 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:52.455143  860889 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:52.664831  860889 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:52.798721  860889 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:52.798885  860889 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:52.798978  860889 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:52.844296  860889 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 11:52:52.948910  860889 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 11:52:53.162797  860889 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 11:52:53.455043  860889 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 11:52:53.607984  860889 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 11:52:53.608185  860889 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-972916 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0520 11:52:53.881074  860889 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 11:52:53.881287  860889 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-972916 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0520 11:52:54.018171  860889 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 11:52:54.283226  860889 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 11:52:54.420151  860889 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 11:52:54.420255  860889 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:54.564914  860889 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:54.881444  860889 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:55.056886  860889 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:55.231301  860889 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:55.326986  860889 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:55.327619  860889 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:55.329882  860889 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:55.331600  860889 out.go:204]   - Booting up control plane ...
	I0520 11:52:55.331710  860889 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:55.331801  860889 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:55.331859  860889 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:55.351979  860889 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:55.352100  860889 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:55.352180  860889 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:55.481281  860889 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:55.481365  860889 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:55.982940  860889 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.002303ms
	I0520 11:52:55.983063  860889 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:53:00.981475  860889 kubeadm.go:309] [api-check] The API server is healthy after 5.001314806s
	I0520 11:53:01.000519  860889 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:53:01.523885  860889 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:53:01.560973  860889 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:53:01.561241  860889 kubeadm.go:309] [mark-control-plane] Marking the node addons-972916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:53:01.582955  860889 kubeadm.go:309] [bootstrap-token] Using token: sx9y2y.ckcomaz7zu73i92b
	I0520 11:53:01.584561  860889 out.go:204]   - Configuring RBAC rules ...
	I0520 11:53:01.584671  860889 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:53:01.591524  860889 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:53:01.599793  860889 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:53:01.605963  860889 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:53:01.611141  860889 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:53:01.615308  860889 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:53:01.713854  860889 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:53:02.161615  860889 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:53:02.712719  860889 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:53:02.714786  860889 kubeadm.go:309] 
	I0520 11:53:02.714889  860889 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:53:02.714898  860889 kubeadm.go:309] 
	I0520 11:53:02.714989  860889 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:53:02.714999  860889 kubeadm.go:309] 
	I0520 11:53:02.715039  860889 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:53:02.715111  860889 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:53:02.715205  860889 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:53:02.715235  860889 kubeadm.go:309] 
	I0520 11:53:02.715315  860889 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:53:02.715326  860889 kubeadm.go:309] 
	I0520 11:53:02.715392  860889 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:53:02.715402  860889 kubeadm.go:309] 
	I0520 11:53:02.715473  860889 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:53:02.715577  860889 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:53:02.715700  860889 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:53:02.715720  860889 kubeadm.go:309] 
	I0520 11:53:02.715838  860889 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:53:02.715955  860889 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:53:02.715968  860889 kubeadm.go:309] 
	I0520 11:53:02.716083  860889 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sx9y2y.ckcomaz7zu73i92b \
	I0520 11:53:02.716247  860889 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 \
	I0520 11:53:02.716282  860889 kubeadm.go:309] 	--control-plane 
	I0520 11:53:02.716291  860889 kubeadm.go:309] 
	I0520 11:53:02.716409  860889 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:53:02.716419  860889 kubeadm.go:309] 
	I0520 11:53:02.716532  860889 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sx9y2y.ckcomaz7zu73i92b \
	I0520 11:53:02.716674  860889 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 
	I0520 11:53:02.717068  860889 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:53:02.717106  860889 cni.go:84] Creating CNI manager for ""
	I0520 11:53:02.717119  860889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:53:02.718772  860889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:53:02.719936  860889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:53:02.731383  860889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
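
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain referenced in the "Configuring bridge CNI" step above. Its exact contents are not in this log; an illustrative bridge + portmap conflist for the same 10.244.0.0/16 pod CIDR would look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
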
	I0520 11:53:02.748134  860889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:53:02.748282  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:02.748295  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-972916 minikube.k8s.io/updated_at=2024_05_20T11_53_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=addons-972916 minikube.k8s.io/primary=true
	I0520 11:53:02.781221  860889 ops.go:34] apiserver oom_adj: -16
	I0520 11:53:02.914424  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:03.415358  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:03.914858  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:04.414672  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:04.914706  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:05.415362  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:05.914624  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:06.414491  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:06.915457  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:07.414574  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:07.915180  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:08.414466  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:08.914585  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:09.414639  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:09.914470  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:10.414729  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:10.915134  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:11.415366  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:11.915297  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:12.415335  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:12.914755  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:13.415184  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:13.915429  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:14.415156  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:14.914801  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:15.415080  860889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:53:15.503063  860889 kubeadm.go:1107] duration metric: took 12.754850406s to wait for elevateKubeSystemPrivileges
	W0520 11:53:15.503106  860889 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:53:15.503114  860889 kubeadm.go:393] duration metric: took 23.377841558s to StartCluster
	I0520 11:53:15.503134  860889 settings.go:142] acquiring lock: {Name:mk4281d9011919f2beed93cad1a6e2e67e70984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:53:15.503274  860889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 11:53:15.503638  860889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:53:15.503842  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 11:53:15.503847  860889 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:53:15.505934  860889 out.go:177] * Verifying Kubernetes components...
	I0520 11:53:15.503923  860889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 11:53:15.504614  860889 config.go:182] Loaded profile config "addons-972916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:53:15.507890  860889 addons.go:69] Setting cloud-spanner=true in profile "addons-972916"
	I0520 11:53:15.507904  860889 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-972916"
	I0520 11:53:15.507921  860889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:53:15.507928  860889 addons.go:69] Setting default-storageclass=true in profile "addons-972916"
	I0520 11:53:15.507935  860889 addons.go:69] Setting ingress-dns=true in profile "addons-972916"
	I0520 11:53:15.507940  860889 addons.go:69] Setting gcp-auth=true in profile "addons-972916"
	I0520 11:53:15.507950  860889 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-972916"
	I0520 11:53:15.507960  860889 mustload.go:65] Loading cluster: addons-972916
	I0520 11:53:15.507967  860889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-972916"
	I0520 11:53:15.507930  860889 addons.go:69] Setting ingress=true in profile "addons-972916"
	I0520 11:53:15.507982  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.507992  860889 addons.go:69] Setting inspektor-gadget=true in profile "addons-972916"
	I0520 11:53:15.508008  860889 addons.go:234] Setting addon inspektor-gadget=true in "addons-972916"
	I0520 11:53:15.508013  860889 addons.go:69] Setting registry=true in profile "addons-972916"
	I0520 11:53:15.508041  860889 addons.go:234] Setting addon registry=true in "addons-972916"
	I0520 11:53:15.508056  860889 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-972916"
	I0520 11:53:15.508073  860889 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-972916"
	I0520 11:53:15.508077  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.508147  860889 config.go:182] Loaded profile config "addons-972916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:53:15.508407  860889 addons.go:69] Setting volumesnapshots=true in profile "addons-972916"
	I0520 11:53:15.508434  860889 addons.go:234] Setting addon volumesnapshots=true in "addons-972916"
	I0520 11:53:15.508433  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508444  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508453  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.508465  860889 addons.go:69] Setting metrics-server=true in profile "addons-972916"
	I0520 11:53:15.508468  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.507960  860889 addons.go:234] Setting addon ingress-dns=true in "addons-972916"
	I0520 11:53:15.508495  860889 addons.go:234] Setting addon metrics-server=true in "addons-972916"
	I0520 11:53:15.508511  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508525  860889 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-972916"
	I0520 11:53:15.508545  860889 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-972916"
	I0520 11:53:15.508469  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508566  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.508548  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.507988  860889 addons.go:234] Setting addon ingress=true in "addons-972916"
	I0520 11:53:15.508568  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.508042  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.508430  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508457  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.508716  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.507934  860889 addons.go:234] Setting addon cloud-spanner=true in "addons-972916"
	I0520 11:53:15.508719  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.508800  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508828  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.508047  860889 addons.go:69] Setting storage-provisioner=true in profile "addons-972916"
	I0520 11:53:15.508898  860889 addons.go:234] Setting addon storage-provisioner=true in "addons-972916"
	I0520 11:53:15.507899  860889 addons.go:69] Setting helm-tiller=true in profile "addons-972916"
	I0520 11:53:15.508925  860889 addons.go:234] Setting addon helm-tiller=true in "addons-972916"
	I0520 11:53:15.508927  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.508946  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.508969  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.507919  860889 addons.go:69] Setting yakd=true in profile "addons-972916"
	I0520 11:53:15.508986  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.508998  860889 addons.go:234] Setting addon yakd=true in "addons-972916"
	I0520 11:53:15.508519  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.509013  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509221  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.509312  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509326  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509330  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.509333  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509337  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509340  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509363  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509382  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.509397  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.509572  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509610  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509677  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509697  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509713  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509715  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.509728  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.509737  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.529924  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0520 11:53:15.529922  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0520 11:53:15.530635  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.530684  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.531310  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.531335  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.531424  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.531443  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.531695  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.531705  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0520 11:53:15.531848  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.532132  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.532204  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0520 11:53:15.532249  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36729
	I0520 11:53:15.532208  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.532651  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.532724  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.532877  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.532923  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.533168  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.533187  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.533308  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.533335  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.533538  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.533697  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.533922  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.533939  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.534172  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.534213  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.534401  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.534437  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.538716  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.538781  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0520 11:53:15.543101  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.545002  860889 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-972916"
	I0520 11:53:15.545011  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.545059  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.545437  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.545487  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.546170  860889 addons.go:234] Setting addon default-storageclass=true in "addons-972916"
	I0520 11:53:15.546214  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.546640  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.546684  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.547583  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.547613  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.548187  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.548757  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.548805  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.553883  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0520 11:53:15.554018  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0520 11:53:15.554369  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.554551  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.554885  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.554908  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.555116  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.555139  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.555290  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.555458  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.555517  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.556047  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.556077  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.557283  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.565799  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 11:53:15.567420  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 11:53:15.568736  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 11:53:15.569814  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0520 11:53:15.570463  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.571262  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 11:53:15.571716  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.572777  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 11:53:15.572792  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.574042  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 11:53:15.575396  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 11:53:15.574488  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.576865  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 11:53:15.578145  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 11:53:15.578168  860889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 11:53:15.578191  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.577583  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.578264  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.580856  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0520 11:53:15.581055  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0520 11:53:15.581252  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.581325  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.583081  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.583104  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.583119  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.583109  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.583296  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.583312  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.583396  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.583633  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.583651  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.583718  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0520 11:53:15.583739  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.583926  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.584039  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.584127  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.584646  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.584664  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.584718  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.584753  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.585104  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.585161  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0520 11:53:15.586164  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.586173  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.586211  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.587135  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.587188  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.588029  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.588049  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.588679  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.589172  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.589501  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:15.589878  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.589911  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.591424  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0520 11:53:15.591910  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.592176  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.593629  860889 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 11:53:15.592814  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.593207  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0520 11:53:15.595866  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0520 11:53:15.596334  860889 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 11:53:15.596412  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.598176  860889 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 11:53:15.596884  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.598193  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 11:53:15.598213  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.596936  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I0520 11:53:15.597223  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0520 11:53:15.597586  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.599518  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.599539  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.599600  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.599821  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.600400  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0520 11:53:15.600534  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.600584  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.600598  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.600641  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.601107  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.601149  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.601178  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.601199  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.601182  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.601392  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.601501  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.601553  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.602180  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.602228  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.602273  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.602289  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.602549  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.602567  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.602588  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.604461  860889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:53:15.603162  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.603197  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.603224  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0520 11:53:15.603380  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.603642  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.603673  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.603696  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.605692  860889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:53:15.605706  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:53:15.605725  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.605789  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.605903  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.606758  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.606794  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.606913  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.607018  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.607255  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.607573  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.607591  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.609081  860889 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 11:53:15.608031  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.608104  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.608788  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.609625  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.610521  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.610788  860889 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 11:53:15.610803  860889 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 11:53:15.610822  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.611127  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.611151  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.609652  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.611378  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.611781  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.611813  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.612152  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.612306  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.613031  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0520 11:53:15.613476  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.613682  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.614133  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.614150  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.614204  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.614217  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.614241  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.614382  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.614517  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.614631  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.615155  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.615745  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:15.615790  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:15.619587  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0520 11:53:15.620198  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.620724  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.620746  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.621075  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.621246  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.622674  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:53:15.623239  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.623738  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.623755  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.624060  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.624228  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.625981  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.627831  860889 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 11:53:15.628982  860889 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 11:53:15.629001  860889 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 11:53:15.629029  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.626982  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0520 11:53:15.632695  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0520 11:53:15.632895  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.632916  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.632932  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.633458  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I0520 11:53:15.633581  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.633655  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.633703  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.633848  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.634075  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.634093  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.634092  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.634510  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.634515  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.634624  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.634698  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.634714  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.635140  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.635174  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.635360  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.636054  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.636078  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.636836  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.637114  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.637177  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.637298  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.639477  860889 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 11:53:15.640591  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.640915  860889 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 11:53:15.642165  860889 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 11:53:15.642190  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 11:53:15.642211  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.643521  860889 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 11:53:15.642480  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0520 11:53:15.643179  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I0520 11:53:15.644397  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I0520 11:53:15.644726  860889 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 11:53:15.645881  860889 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 11:53:15.645300  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.645350  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.645381  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.645590  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.646283  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.647070  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32795
	I0520 11:53:15.647319  860889 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 11:53:15.647336  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 11:53:15.647354  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.647357  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.647319  860889 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 11:53:15.647397  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 11:53:15.647409  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.647476  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.647494  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.647518  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.647673  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.648292  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.648396  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0520 11:53:15.648591  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.648614  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.648786  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.648805  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.649009  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.649102  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.649210  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.649257  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.649357  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.649373  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.649399  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.649743  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.649761  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.650040  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.650059  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.650131  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.650414  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.650418  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.650509  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.650662  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.650997  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.651196  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.651256  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.652927  860889 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 11:53:15.651835  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.652139  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.652860  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.653038  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.653178  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.653729  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.654383  860889 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 11:53:15.654395  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 11:53:15.654407  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.654434  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.656262  860889 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 11:53:15.654951  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.654957  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.655747  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.657455  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.657647  860889 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 11:53:15.657683  860889 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 11:53:15.657737  860889 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 11:53:15.658920  860889 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 11:53:15.657796  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.660228  860889 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:53:15.660249  860889 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:53:15.660266  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.658954  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.657953  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.658968  860889 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 11:53:15.658986  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.657948  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.659236  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.661885  860889 out.go:177]   - Using image docker.io/busybox:stable
	I0520 11:53:15.661997  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.662108  860889 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 11:53:15.663668  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 11:53:15.663686  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.662199  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.662334  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.662377  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.663829  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0520 11:53:15.662417  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.664027  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.662538  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.663069  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.664070  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.663602  860889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 11:53:15.664089  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 11:53:15.664090  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.664104  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.664274  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.664303  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.664496  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.664525  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.664677  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.665322  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:15.665973  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:15.665994  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:15.666614  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:15.666833  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:15.667937  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.667966  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.668344  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.668368  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.668376  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.668398  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.668507  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.668654  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.668737  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.668741  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.668865  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.669027  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.669154  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.669291  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.669314  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.669320  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.669524  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:15.669569  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.669638  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:15.669832  860889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:53:15.669842  860889 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:53:15.669852  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.669863  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:15.670017  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.670158  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	W0520 11:53:15.671786  860889 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39104->192.168.39.206:22: read: connection reset by peer
	I0520 11:53:15.671819  860889 retry.go:31] will retry after 347.569161ms: ssh: handshake failed: read tcp 192.168.39.1:39104->192.168.39.206:22: read: connection reset by peer
	I0520 11:53:15.672431  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.672780  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:15.672800  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:15.672914  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:15.673060  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:15.673167  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:15.673250  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:16.058685  860889 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 11:53:16.058722  860889 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 11:53:16.122452  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 11:53:16.122477  860889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 11:53:16.139687  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 11:53:16.142755  860889 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 11:53:16.142779  860889 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 11:53:16.145221  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:53:16.162920  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 11:53:16.171844  860889 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 11:53:16.171866  860889 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 11:53:16.173407  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 11:53:16.177801  860889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:53:16.178011  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 11:53:16.190758  860889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:53:16.190777  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 11:53:16.193576  860889 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 11:53:16.193594  860889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 11:53:16.205899  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:53:16.213995  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 11:53:16.255985  860889 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 11:53:16.256010  860889 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 11:53:16.288914  860889 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 11:53:16.288939  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 11:53:16.299537  860889 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 11:53:16.299564  860889 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 11:53:16.314627  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 11:53:16.314652  860889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 11:53:16.339611  860889 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 11:53:16.339641  860889 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 11:53:16.354568  860889 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 11:53:16.354589  860889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 11:53:16.379336  860889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:53:16.379370  860889 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:53:16.446507  860889 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 11:53:16.446536  860889 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 11:53:16.522579  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 11:53:16.532367  860889 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 11:53:16.532397  860889 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 11:53:16.545720  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 11:53:16.545744  860889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 11:53:16.598459  860889 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 11:53:16.598502  860889 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 11:53:16.600603  860889 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 11:53:16.600622  860889 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 11:53:16.639921  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 11:53:16.650360  860889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:53:16.650379  860889 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:53:16.695468  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 11:53:16.737436  860889 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 11:53:16.737459  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 11:53:16.788411  860889 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 11:53:16.788447  860889 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 11:53:16.809134  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 11:53:16.809161  860889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 11:53:16.843503  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 11:53:16.843530  860889 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 11:53:16.854146  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:53:16.965867  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 11:53:16.970214  860889 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 11:53:16.970236  860889 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 11:53:17.029580  860889 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 11:53:17.029608  860889 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 11:53:17.131001  860889 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 11:53:17.131033  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 11:53:17.235138  860889 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 11:53:17.235167  860889 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 11:53:17.370502  860889 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 11:53:17.370532  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 11:53:17.503662  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 11:53:17.650860  860889 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 11:53:17.650897  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 11:53:17.657299  860889 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 11:53:17.657322  860889 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 11:53:17.940174  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 11:53:17.976622  860889 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 11:53:17.976656  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 11:53:18.308253  860889 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 11:53:18.308292  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 11:53:18.638538  860889 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 11:53:18.638577  860889 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 11:53:18.827891  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 11:53:19.071873  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.932140146s)
	I0520 11:53:19.071947  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:19.071961  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:19.072282  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:19.072301  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:19.072312  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:19.072311  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:19.072321  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:19.072669  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:19.072682  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:20.502554  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.357289572s)
	I0520 11:53:20.502639  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:20.502633  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.339674878s)
	I0520 11:53:20.502689  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:20.502654  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:20.502709  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:20.503073  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:20.503114  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:20.503121  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:20.503140  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:20.503149  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:20.503168  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:20.503181  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:20.503189  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:20.503156  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:20.503251  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:20.503421  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:20.503440  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:20.503557  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:20.503562  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:20.503581  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:22.685693  860889 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 11:53:22.685750  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:22.688913  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:22.689313  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:22.689346  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:22.689516  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:22.689779  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:22.689974  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:22.690157  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:23.148165  860889 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 11:53:23.366355  860889 addons.go:234] Setting addon gcp-auth=true in "addons-972916"
	I0520 11:53:23.366418  860889 host.go:66] Checking if "addons-972916" exists ...
	I0520 11:53:23.366867  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:23.366913  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:23.382349  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0520 11:53:23.382732  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:23.383301  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:23.383336  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:23.383773  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:23.384266  860889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:53:23.384293  860889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:53:23.399378  860889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0520 11:53:23.399881  860889 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:53:23.400393  860889 main.go:141] libmachine: Using API Version  1
	I0520 11:53:23.400431  860889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:53:23.400795  860889 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:53:23.401015  860889 main.go:141] libmachine: (addons-972916) Calling .GetState
	I0520 11:53:23.402486  860889 main.go:141] libmachine: (addons-972916) Calling .DriverName
	I0520 11:53:23.402729  860889 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 11:53:23.402758  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHHostname
	I0520 11:53:23.405231  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:23.405620  860889 main.go:141] libmachine: (addons-972916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4b:82", ip: ""} in network mk-addons-972916: {Iface:virbr1 ExpiryTime:2024-05-20 12:52:34 +0000 UTC Type:0 Mac:52:54:00:df:4b:82 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-972916 Clientid:01:52:54:00:df:4b:82}
	I0520 11:53:23.405651  860889 main.go:141] libmachine: (addons-972916) DBG | domain addons-972916 has defined IP address 192.168.39.206 and MAC address 52:54:00:df:4b:82 in network mk-addons-972916
	I0520 11:53:23.405810  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHPort
	I0520 11:53:23.406011  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHKeyPath
	I0520 11:53:23.406188  860889 main.go:141] libmachine: (addons-972916) Calling .GetSSHUsername
	I0520 11:53:23.406323  860889 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/addons-972916/id_rsa Username:docker}
	I0520 11:53:23.749950  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.576506801s)
	I0520 11:53:23.749989  860889 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.57195295s)
	I0520 11:53:23.749959  860889 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.572114363s)
	I0520 11:53:23.750053  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.544136349s)
	I0520 11:53:23.750080  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750093  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750168  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.53613368s)
	I0520 11:53:23.750211  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.110267675s)
	I0520 11:53:23.750219  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750232  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750002  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750268  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750279  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.054783604s)
	I0520 11:53:23.750300  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750313  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750183  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.227570072s)
	I0520 11:53:23.750008  860889 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 11:53:23.750353  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750368  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750240  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750446  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.896265685s)
	I0520 11:53:23.750241  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750466  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750469  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.750475  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750506  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.750514  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.750522  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750527  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.750530  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750535  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.750533  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.784624247s)
	I0520 11:53:23.750543  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750550  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750558  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750567  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.750648  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.246934279s)
	W0520 11:53:23.750691  860889 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 11:53:23.750716  860889 retry.go:31] will retry after 157.39119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 11:53:23.750806  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.810599419s)
	I0520 11:53:23.750827  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.750835  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.751118  860889 node_ready.go:35] waiting up to 6m0s for node "addons-972916" to be "Ready" ...
	I0520 11:53:23.753288  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753328  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753335  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.753344  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.753351  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.753408  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753414  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753431  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.753431  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753441  860889 addons.go:470] Verifying addon ingress=true in "addons-972916"
	I0520 11:53:23.753452  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753458  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.753465  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.753471  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.755482  860889 out.go:177] * Verifying ingress addon...
	I0520 11:53:23.753525  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753543  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753561  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753580  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753596  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753613  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753649  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753683  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753702  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753717  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753734  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753748  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753764  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.753791  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.753809  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.756888  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.756886  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.756905  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.756904  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.756915  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.756930  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.756913  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.757008  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.758327  860889 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-972916 service yakd-dashboard -n yakd-dashboard
	
	I0520 11:53:23.756922  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.758362  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.757019  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.758419  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.757027  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.758454  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.758469  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.757068  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.758505  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.758513  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.757300  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.757307  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.758571  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.758602  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.758611  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.758614  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.757679  860889 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 11:53:23.758720  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.758730  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.758747  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.759058  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.759092  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.760171  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.759099  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.760183  860889 addons.go:470] Verifying addon metrics-server=true in "addons-972916"
	I0520 11:53:23.759108  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.760222  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.760244  860889 addons.go:470] Verifying addon registry=true in "addons-972916"
	I0520 11:53:23.761642  860889 out.go:177] * Verifying registry addon...
	I0520 11:53:23.763875  860889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 11:53:23.789068  860889 node_ready.go:49] node "addons-972916" has status "Ready":"True"
	I0520 11:53:23.789097  860889 node_ready.go:38] duration metric: took 37.955693ms for node "addons-972916" to be "Ready" ...
	I0520 11:53:23.789108  860889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:53:23.790906  860889 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 11:53:23.790929  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:23.803857  860889 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 11:53:23.803887  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:23.839257  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.839283  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.839580  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.839601  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	W0520 11:53:23.839696  860889 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0520 11:53:23.852216  860889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fkhm8" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:23.865615  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:23.865643  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:23.866015  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:23.866015  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:23.866053  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:23.909009  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 11:53:23.959516  860889 pod_ready.go:92] pod "coredns-7db6d8ff4d-fkhm8" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:23.959551  860889 pod_ready.go:81] duration metric: took 107.299518ms for pod "coredns-7db6d8ff4d-fkhm8" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:23.959566  860889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwbv8" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.007710  860889 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwbv8" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:24.007737  860889 pod_ready.go:81] duration metric: took 48.163617ms for pod "coredns-7db6d8ff4d-jwbv8" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.007751  860889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.038209  860889 pod_ready.go:92] pod "etcd-addons-972916" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:24.038236  860889 pod_ready.go:81] duration metric: took 30.476116ms for pod "etcd-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.038249  860889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.056374  860889 pod_ready.go:92] pod "kube-apiserver-addons-972916" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:24.056405  860889 pod_ready.go:81] duration metric: took 18.144538ms for pod "kube-apiserver-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.056419  860889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.156097  860889 pod_ready.go:92] pod "kube-controller-manager-addons-972916" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:24.156125  860889 pod_ready.go:81] duration metric: took 99.697106ms for pod "kube-controller-manager-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.156142  860889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7zpx" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.254867  860889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-972916" context rescaled to 1 replicas
	I0520 11:53:24.263418  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:24.269528  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:24.556770  860889 pod_ready.go:92] pod "kube-proxy-m7zpx" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:24.556803  860889 pod_ready.go:81] duration metric: took 400.653299ms for pod "kube-proxy-m7zpx" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.556815  860889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.765963  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:24.768719  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:24.965247  860889 pod_ready.go:92] pod "kube-scheduler-addons-972916" in "kube-system" namespace has status "Ready":"True"
	I0520 11:53:24.965286  860889 pod_ready.go:81] duration metric: took 408.459511ms for pod "kube-scheduler-addons-972916" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:24.965302  860889 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace to be "Ready" ...
	I0520 11:53:25.337428  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:25.337430  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:25.501609  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.673647949s)
	I0520 11:53:25.501628  860889 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.098873271s)
	I0520 11:53:25.503089  860889 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 11:53:25.501681  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:25.504371  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:25.505596  860889 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 11:53:25.504641  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:25.505635  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:25.505649  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:25.505657  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:25.504759  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:25.507004  860889 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 11:53:25.507026  860889 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 11:53:25.506063  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:25.507102  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:25.507117  860889 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-972916"
	I0520 11:53:25.506088  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:25.509299  860889 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 11:53:25.511203  860889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 11:53:25.542715  860889 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 11:53:25.542743  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:25.746764  860889 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 11:53:25.746802  860889 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 11:53:25.763343  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:25.778786  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:25.864079  860889 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 11:53:25.864105  860889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 11:53:25.901957  860889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 11:53:26.023659  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:26.262444  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:26.280519  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:26.316318  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.407243589s)
	I0520 11:53:26.316417  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:26.316434  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:26.316833  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:26.316853  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:26.316862  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:26.316871  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:26.317171  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:26.317188  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:26.317218  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:26.516330  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:26.763464  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:26.768709  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:26.972834  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:27.019572  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:27.263389  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:27.269828  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:27.540047  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:27.773114  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:27.795116  860889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.893118259s)
	I0520 11:53:27.795178  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:27.795193  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:27.795555  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:27.795573  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:27.795584  860889 main.go:141] libmachine: Making call to close driver server
	I0520 11:53:27.795603  860889 main.go:141] libmachine: (addons-972916) Calling .Close
	I0520 11:53:27.795876  860889 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:53:27.795900  860889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:53:27.795909  860889 main.go:141] libmachine: (addons-972916) DBG | Closing plugin on server side
	I0520 11:53:27.797716  860889 addons.go:470] Verifying addon gcp-auth=true in "addons-972916"
	I0520 11:53:27.799434  860889 out.go:177] * Verifying gcp-auth addon...
	I0520 11:53:27.801546  860889 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 11:53:27.802175  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:27.815253  860889 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 11:53:27.815274  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:28.016799  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:28.263382  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:28.268792  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:28.305977  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:28.517514  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:28.763076  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:28.768993  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:28.804266  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:29.017209  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:29.263931  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:29.267535  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:29.305468  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:29.471800  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:29.517206  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:29.764336  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:29.770351  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:29.804753  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:30.017363  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:30.262460  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:30.268615  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:30.304802  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:30.516676  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:30.762953  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:30.768046  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:30.804875  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:31.016513  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:31.262515  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:31.268487  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:31.305024  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:31.472062  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:31.517861  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:31.763662  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:31.770341  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:31.805332  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:32.016899  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:32.264534  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:32.271642  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:32.306647  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:32.516483  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:32.763115  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:32.771418  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:32.805508  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:33.016659  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:33.262572  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:33.270948  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:33.305517  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:33.516480  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:33.764281  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:33.770649  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:33.805444  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:33.973844  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:34.016256  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:34.262263  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:34.268696  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:34.305616  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:34.519500  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:34.762563  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:34.767560  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:34.805498  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:35.017739  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:35.263974  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:35.271743  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:35.305948  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:35.517050  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:35.763283  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:35.768686  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:35.804666  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:36.017541  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:36.502709  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:36.502871  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:36.503070  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:36.518983  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:36.521409  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:36.764060  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:36.772457  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:36.805168  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:37.016844  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:37.263747  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:37.268817  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:37.306066  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:37.518278  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:37.762971  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:37.768396  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:37.805217  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:38.019272  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:38.263657  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:38.268601  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:38.305568  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:38.516963  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:38.762949  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:38.768886  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:38.805862  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:38.971348  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:39.019983  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:39.263481  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:39.268603  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:39.306772  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:39.516737  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:39.763174  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:39.768475  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:39.805467  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:40.018171  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:40.263716  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:40.268044  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:40.305664  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:40.535228  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:40.762976  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:40.768075  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:40.804705  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:40.972607  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:41.042627  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:41.265786  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:41.270601  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:41.311231  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:41.517494  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:41.763427  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:41.769484  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:41.806251  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:42.018011  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:42.263216  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:42.269344  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:42.304967  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:42.517876  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:42.763363  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:42.771575  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:42.805806  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:43.018503  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:43.264828  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:43.269506  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:43.306016  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:43.471339  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:43.519269  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:43.762614  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:43.767647  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:43.805598  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:44.016278  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:44.264585  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:44.268230  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:44.304837  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:44.517558  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:44.764052  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:44.772579  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:44.805700  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:45.017643  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:45.265977  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:45.268347  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:45.307796  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:45.472533  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:45.517037  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:45.763507  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:45.769690  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:45.805302  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:46.017255  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:46.264274  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:46.270312  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:46.305053  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:46.516891  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:46.763525  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:46.769391  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:46.805475  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:47.018273  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:47.262472  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:47.270272  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:47.305273  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:47.517306  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:47.763946  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:47.767574  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:47.805589  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:47.971192  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:48.017022  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:48.263501  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:48.268316  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:48.304817  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:48.517407  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:48.762611  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:48.768510  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:48.805281  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:49.017153  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:49.263369  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:49.268260  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:49.304933  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:49.516772  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:49.763890  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:49.769109  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:49.804991  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:49.973152  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:50.016971  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:50.263567  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:50.267587  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:50.305193  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:50.521448  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:50.767813  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:50.776052  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:50.806454  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:51.018761  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:51.264659  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:51.267927  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:51.306488  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:51.516861  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:51.764953  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:51.773061  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:51.805863  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:52.017046  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:52.263560  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:52.268775  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:52.305500  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:52.473947  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:52.524981  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:52.766110  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:52.770992  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:52.805363  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:53.016912  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:53.265861  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:53.271064  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:53.305086  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:53.516389  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:53.763267  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:53.776490  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:53.824030  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:54.018618  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:54.263325  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:54.268790  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:54.305072  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:54.516087  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:54.763375  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:54.768392  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:54.805649  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:54.970646  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:55.016717  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:55.263815  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:55.268925  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:55.306203  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:55.517474  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:55.763104  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:55.768161  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:55.807717  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:56.016640  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:56.263512  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:56.270359  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:56.304760  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:56.517044  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:56.764110  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:56.772201  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:56.805877  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:57.016062  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:57.263957  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:57.269359  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:57.305179  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:57.472209  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:57.517714  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:57.767320  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:57.774517  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 11:53:57.805178  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:58.016373  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:58.262960  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:58.268718  860889 kapi.go:107] duration metric: took 34.504837666s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 11:53:58.305336  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:58.518320  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:58.762796  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:58.806007  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:59.016579  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:59.262821  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:53:59.305264  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:59.482854  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:53:59.517538  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:53:59.972004  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:53:59.972612  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:00.017796  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:00.264371  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:00.305052  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:00.520509  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:00.763836  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:00.808296  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:01.037593  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:01.262782  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:01.311515  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:01.520736  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:01.763242  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:01.804972  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:01.985493  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:02.024756  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:02.262969  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:02.305463  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:02.516688  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:02.763578  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:02.805566  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:03.016840  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:03.263102  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:03.305452  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:03.534988  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:03.765752  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:03.818291  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:04.016409  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:04.266046  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:04.307229  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:04.473019  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:04.516923  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:04.762739  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:04.805488  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:05.016411  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:05.263128  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:05.306893  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:05.515682  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:05.763501  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:05.809019  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:06.015795  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:06.263090  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:06.307442  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:06.518607  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:06.763097  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:06.806171  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:06.971476  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:07.015937  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:07.263597  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:07.304767  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:07.517005  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:07.764531  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:07.804804  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:08.121189  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:08.263833  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:08.305284  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:08.515957  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:08.766115  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:08.811510  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:09.168810  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:09.175643  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:09.267800  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:09.305728  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:09.516954  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:09.762926  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:09.805944  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:10.015868  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:10.263102  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:10.305572  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:10.519929  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:10.863672  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:10.864443  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:11.017413  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:11.262370  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:11.305839  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:11.471435  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:11.516290  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:11.762788  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:11.805015  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:12.016674  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:12.267528  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:12.305124  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:12.516510  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:12.764628  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:12.806127  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:13.112749  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:13.266782  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:13.306947  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:13.474632  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:13.518711  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:13.763031  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:13.805643  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:14.017103  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:14.263419  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:14.308228  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:14.519581  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:14.767807  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:14.804759  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:15.023455  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:15.266208  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:15.306576  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:15.480083  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:15.516771  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:15.762255  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:15.805844  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:16.020488  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:16.266492  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:16.306474  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:16.519206  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:16.764375  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:16.805033  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:17.016649  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:17.263126  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:17.306188  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:17.516284  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:17.762643  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:17.805168  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:17.973970  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:18.017382  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:18.263103  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:18.305868  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:18.517239  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:18.764516  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:18.804654  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:19.021191  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:19.264033  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:19.306233  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:19.516695  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:19.762776  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:19.806401  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:19.976741  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:20.018263  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:20.263461  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:20.314396  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:20.517660  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:20.769764  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:20.807823  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:21.255143  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:21.270113  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:21.305229  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:21.518464  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:21.765186  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:21.805980  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:22.017300  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:22.263564  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:22.304811  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:22.471569  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:22.515973  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:22.763711  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:22.804891  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:23.016463  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 11:54:23.263083  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:23.305664  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:23.523152  860889 kapi.go:107] duration metric: took 58.011945116s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
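The csi-hostpath-driver wait above completes after roughly 58 s, while the pod_ready.go:102 lines keep reporting that metrics-server-c59844bb4-b8mnb has not yet reached Ready. A per-pod check equivalent in spirit to those lines is sketched below; the pod name and namespace are copied from the log, but the helper itself is an illustrative sketch, not the code behind this report.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to True,
// which is the state the pod_ready.go lines above are still waiting for.
func isPodReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	// No Ready condition yet usually means the pod is still Pending.
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Pod name taken from the log; it will carry a different suffix elsewhere.
	ready, err := isPodReady(cs, "kube-system", "metrics-server-c59844bb4-b8mnb")
	fmt.Println("ready:", ready, "err:", err)
}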
	I0520 11:54:23.763456  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:23.804633  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:24.263382  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:24.305524  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:24.763650  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:24.805522  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:24.971499  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:25.262914  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:25.305907  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:25.763385  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:25.804841  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:26.263120  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:26.305540  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:26.764448  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:26.805577  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:26.972103  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:27.263796  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:27.305144  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:27.763159  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:27.806511  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:28.263865  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:28.305803  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:28.763826  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:28.805179  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:29.265013  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:29.307952  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:29.474895  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:29.763219  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:29.805950  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:30.262648  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:30.305284  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:30.764217  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:30.806601  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:31.298109  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:31.305295  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:31.560589  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:31.764230  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:31.804454  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:32.263202  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:32.305936  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:32.763824  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:32.804986  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:33.263614  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:33.305759  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:34.011448  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:34.014014  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:34.017898  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:34.263890  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:34.305051  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:34.763400  860889 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 11:54:34.805413  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:35.264866  860889 kapi.go:107] duration metric: took 1m11.507185134s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 11:54:35.309472  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:35.804608  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:36.305887  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:36.472183  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:36.805895  860889 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 11:54:37.306160  860889 kapi.go:107] duration metric: took 1m9.504609866s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 11:54:37.307831  860889 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-972916 cluster.
	I0520 11:54:37.309084  860889 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 11:54:37.310271  860889 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0520 11:54:37.311506  860889 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, inspektor-gadget, helm-tiller, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0520 11:54:37.312593  860889 addons.go:505] duration metric: took 1m21.808673086s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner inspektor-gadget helm-tiller yakd metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0520 11:54:38.472304  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:40.971888  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:43.470895  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:45.471793  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:47.471993  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:49.972655  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:52.471850  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:54.972528  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:57.472268  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:54:59.474276  860889 pod_ready.go:102] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"False"
	I0520 11:55:01.975199  860889 pod_ready.go:92] pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace has status "Ready":"True"
	I0520 11:55:01.975225  860889 pod_ready.go:81] duration metric: took 1m37.009912659s for pod "metrics-server-c59844bb4-b8mnb" in "kube-system" namespace to be "Ready" ...
	I0520 11:55:01.975235  860889 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-spssf" in "kube-system" namespace to be "Ready" ...
	I0520 11:55:01.980997  860889 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-spssf" in "kube-system" namespace has status "Ready":"True"
	I0520 11:55:01.981024  860889 pod_ready.go:81] duration metric: took 5.782078ms for pod "nvidia-device-plugin-daemonset-spssf" in "kube-system" namespace to be "Ready" ...
	I0520 11:55:01.981046  860889 pod_ready.go:38] duration metric: took 1m38.191925119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:55:01.981072  860889 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:55:01.981107  860889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:55:01.981167  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:55:02.032680  860889 cri.go:89] found id: "916897bd975a0a351c7c83f7321612a46c472158961d13d9b6818ebd4d9907bc"
	I0520 11:55:02.032704  860889 cri.go:89] found id: ""
	I0520 11:55:02.032713  860889 logs.go:276] 1 containers: [916897bd975a0a351c7c83f7321612a46c472158961d13d9b6818ebd4d9907bc]
	I0520 11:55:02.032766  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:55:02.040841  860889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:55:02.040917  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:55:02.087246  860889 cri.go:89] found id: "553e2ba8ddd2a40529c2cc84bbb2f897fe6f738707c5cdf2e953bcfcfd35f949"
	I0520 11:55:02.087275  860889 cri.go:89] found id: ""
	I0520 11:55:02.087285  860889 logs.go:276] 1 containers: [553e2ba8ddd2a40529c2cc84bbb2f897fe6f738707c5cdf2e953bcfcfd35f949]
	I0520 11:55:02.087352  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:55:02.091613  860889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:55:02.091676  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:55:02.148083  860889 cri.go:89] found id: "0c820b0e09157472f0c27206a5e725128f1191c73ec3a39a6d84b27f7eff4fc2"
	I0520 11:55:02.148105  860889 cri.go:89] found id: ""
	I0520 11:55:02.148114  860889 logs.go:276] 1 containers: [0c820b0e09157472f0c27206a5e725128f1191c73ec3a39a6d84b27f7eff4fc2]
	I0520 11:55:02.148179  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:55:02.152324  860889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:55:02.152397  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:55:02.190920  860889 cri.go:89] found id: "94480c6310fdb88066036c1ff0fc4b5104a27a85c86d9cdb7486f2c28084c707"
	I0520 11:55:02.190944  860889 cri.go:89] found id: ""
	I0520 11:55:02.190952  860889 logs.go:276] 1 containers: [94480c6310fdb88066036c1ff0fc4b5104a27a85c86d9cdb7486f2c28084c707]
	I0520 11:55:02.191002  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:55:02.195444  860889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:55:02.195499  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:55:02.261210  860889 cri.go:89] found id: "c0492cede0af2ced055105eba7a993b8dc8fbcd13fc014eef300f48bde9c1a23"
	I0520 11:55:02.261235  860889 cri.go:89] found id: ""
	I0520 11:55:02.261244  860889 logs.go:276] 1 containers: [c0492cede0af2ced055105eba7a993b8dc8fbcd13fc014eef300f48bde9c1a23]
	I0520 11:55:02.261302  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:55:02.265693  860889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:55:02.265769  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:55:02.306929  860889 cri.go:89] found id: "bf7683096a1c32f5dcb5049231754911fea96997281027aaa65c668427d28273"
	I0520 11:55:02.306954  860889 cri.go:89] found id: ""
	I0520 11:55:02.306962  860889 logs.go:276] 1 containers: [bf7683096a1c32f5dcb5049231754911fea96997281027aaa65c668427d28273]
	I0520 11:55:02.307034  860889 ssh_runner.go:195] Run: which crictl
	I0520 11:55:02.311462  860889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:55:02.311529  860889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:55:02.351009  860889 cri.go:89] found id: ""
	I0520 11:55:02.351045  860889 logs.go:276] 0 containers: []
	W0520 11:55:02.351057  860889 logs.go:278] No container was found matching "kindnet"
	I0520 11:55:02.351072  860889 logs.go:123] Gathering logs for kubelet ...
	I0520 11:55:02.351089  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:55:02.409473  860889 logs.go:138] Found kubelet problem: May 20 11:53:21 addons-972916 kubelet[1288]: W0520 11:53:21.734886    1288 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-972916" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-972916' and this object
	W0520 11:55:02.409645  860889 logs.go:138] Found kubelet problem: May 20 11:53:21 addons-972916 kubelet[1288]: E0520 11:53:21.735459    1288 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-972916" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-972916' and this object
	I0520 11:55:02.437932  860889 logs.go:123] Gathering logs for etcd [553e2ba8ddd2a40529c2cc84bbb2f897fe6f738707c5cdf2e953bcfcfd35f949] ...
	I0520 11:55:02.437966  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 553e2ba8ddd2a40529c2cc84bbb2f897fe6f738707c5cdf2e953bcfcfd35f949"
	I0520 11:55:02.504040  860889 logs.go:123] Gathering logs for kube-scheduler [94480c6310fdb88066036c1ff0fc4b5104a27a85c86d9cdb7486f2c28084c707] ...
	I0520 11:55:02.504076  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94480c6310fdb88066036c1ff0fc4b5104a27a85c86d9cdb7486f2c28084c707"
	I0520 11:55:02.572823  860889 logs.go:123] Gathering logs for kube-proxy [c0492cede0af2ced055105eba7a993b8dc8fbcd13fc014eef300f48bde9c1a23] ...
	I0520 11:55:02.572858  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0492cede0af2ced055105eba7a993b8dc8fbcd13fc014eef300f48bde9c1a23"
	I0520 11:55:02.636991  860889 logs.go:123] Gathering logs for kube-controller-manager [bf7683096a1c32f5dcb5049231754911fea96997281027aaa65c668427d28273] ...
	I0520 11:55:02.637021  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf7683096a1c32f5dcb5049231754911fea96997281027aaa65c668427d28273"
	I0520 11:55:02.699062  860889 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:55:02.699101  860889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-linux-amd64 start -p addons-972916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
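Note on the readiness polling above: the pod_ready.go:102 lines show minikube repeatedly checking the Ready condition of metrics-server-c59844bb4-b8mnb until it turned True at 11:55:01. A minimal sketch of performing the same check by hand, assuming kubectl is pointed at the addons-972916 cluster created by this run (the jsonpath expression is an illustration, not the test's own code):

    # hypothetical manual check: prints "True" once the pod's Ready condition is satisfied
    kubectl --context addons-972916 -n kube-system \
      get pod metrics-server-c59844bb4-b8mnb \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'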

                                                
                                    
x
+
TestErrorSpam/setup (39.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-836913 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-836913 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-836913 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-836913 --driver=kvm2  --container-runtime=crio: (39.642614709s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1"
error_spam_test.go:110: minikube stdout:
* [nospam-836913] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18932
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting "nospam-836913" primary control-plane node in "nospam-836913" cluster
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-836913" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
--- FAIL: TestErrorSpam/setup (39.64s)
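The only unexpected stderr line is the image/minikube version-mismatch warning, which itself suggests deleting and recreating the cluster. A minimal sketch of that remediation, assuming the same profile and flags the test used (copied from the command line above; not part of the test itself):

    # hypothetical remediation suggested by the warning text
    out/minikube-linux-amd64 delete -p nospam-836913
    out/minikube-linux-amd64 start -p nospam-836913 -n=1 --memory=2250 --wait=false \
      --log_dir=/tmp/nospam-836913 --driver=kvm2 --container-runtime=crio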

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.311903706s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image ls: (2.237585134s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-195764" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.55s)
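A minimal sketch of reproducing this check by hand, using the same load and ls invocations recorded above (the trailing grep for the expected tag is an illustration; a non-zero exit from grep corresponds to the "image is not there" assertion failure):

    # hypothetical manual reproduction of the load-then-list check
    out/minikube-linux-amd64 -p functional-195764 image load \
      /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-195764 image ls | \
      grep 'gcr.io/google-containers/addon-resizer:functional-195764'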

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 node stop m02 -v=7 --alsologtostderr
E0520 12:41:30.999884  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:51.480202  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:42:32.440607  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.469533281s)

                                                
                                                
-- stdout --
	* Stopping node "ha-252263-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:41:24.283962  878922 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:41:24.284214  878922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:41:24.284224  878922 out.go:304] Setting ErrFile to fd 2...
	I0520 12:41:24.284228  878922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:41:24.284423  878922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:41:24.284667  878922 mustload.go:65] Loading cluster: ha-252263
	I0520 12:41:24.285025  878922 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:41:24.285040  878922 stop.go:39] StopHost: ha-252263-m02
	I0520 12:41:24.285370  878922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:41:24.285430  878922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:41:24.302194  878922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0520 12:41:24.302610  878922 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:41:24.303233  878922 main.go:141] libmachine: Using API Version  1
	I0520 12:41:24.303260  878922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:41:24.303632  878922 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:41:24.306104  878922 out.go:177] * Stopping node "ha-252263-m02"  ...
	I0520 12:41:24.307384  878922 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 12:41:24.307430  878922 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:41:24.307653  878922 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 12:41:24.307688  878922 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:41:24.310649  878922 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:41:24.311123  878922 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:41:24.311150  878922 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:41:24.311336  878922 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:41:24.311503  878922 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:41:24.311662  878922 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:41:24.311781  878922 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:41:24.403604  878922 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 12:41:24.457368  878922 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 12:41:24.516788  878922 main.go:141] libmachine: Stopping "ha-252263-m02"...
	I0520 12:41:24.516820  878922 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:41:24.518562  878922 main.go:141] libmachine: (ha-252263-m02) Calling .Stop
	I0520 12:41:24.522595  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 0/120
	I0520 12:41:25.524023  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 1/120
	I0520 12:41:26.525417  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 2/120
	I0520 12:41:27.526799  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 3/120
	I0520 12:41:28.528760  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 4/120
	I0520 12:41:29.530766  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 5/120
	I0520 12:41:30.532093  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 6/120
	I0520 12:41:31.533556  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 7/120
	I0520 12:41:32.534801  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 8/120
	I0520 12:41:33.536170  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 9/120
	I0520 12:41:34.538401  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 10/120
	I0520 12:41:35.540438  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 11/120
	I0520 12:41:36.542009  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 12/120
	I0520 12:41:37.543650  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 13/120
	I0520 12:41:38.545177  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 14/120
	I0520 12:41:39.546522  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 15/120
	I0520 12:41:40.547799  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 16/120
	I0520 12:41:41.549353  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 17/120
	I0520 12:41:42.550756  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 18/120
	I0520 12:41:43.552488  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 19/120
	I0520 12:41:44.554398  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 20/120
	I0520 12:41:45.555824  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 21/120
	I0520 12:41:46.557232  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 22/120
	I0520 12:41:47.558829  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 23/120
	I0520 12:41:48.559995  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 24/120
	I0520 12:41:49.561819  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 25/120
	I0520 12:41:50.563058  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 26/120
	I0520 12:41:51.564541  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 27/120
	I0520 12:41:52.566077  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 28/120
	I0520 12:41:53.567479  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 29/120
	I0520 12:41:54.569544  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 30/120
	I0520 12:41:55.571958  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 31/120
	I0520 12:41:56.573606  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 32/120
	I0520 12:41:57.574937  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 33/120
	I0520 12:41:58.576306  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 34/120
	I0520 12:41:59.578297  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 35/120
	I0520 12:42:00.579809  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 36/120
	I0520 12:42:01.581390  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 37/120
	I0520 12:42:02.583146  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 38/120
	I0520 12:42:03.584478  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 39/120
	I0520 12:42:04.586411  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 40/120
	I0520 12:42:05.587819  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 41/120
	I0520 12:42:06.589457  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 42/120
	I0520 12:42:07.590652  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 43/120
	I0520 12:42:08.592067  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 44/120
	I0520 12:42:09.593813  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 45/120
	I0520 12:42:10.595292  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 46/120
	I0520 12:42:11.597251  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 47/120
	I0520 12:42:12.598871  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 48/120
	I0520 12:42:13.600205  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 49/120
	I0520 12:42:14.602475  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 50/120
	I0520 12:42:15.603887  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 51/120
	I0520 12:42:16.605139  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 52/120
	I0520 12:42:17.606541  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 53/120
	I0520 12:42:18.608183  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 54/120
	I0520 12:42:19.610144  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 55/120
	I0520 12:42:20.611239  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 56/120
	I0520 12:42:21.613419  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 57/120
	I0520 12:42:22.614632  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 58/120
	I0520 12:42:23.616006  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 59/120
	I0520 12:42:24.617508  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 60/120
	I0520 12:42:25.619036  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 61/120
	I0520 12:42:26.620079  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 62/120
	I0520 12:42:27.621641  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 63/120
	I0520 12:42:28.622913  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 64/120
	I0520 12:42:29.624702  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 65/120
	I0520 12:42:30.626215  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 66/120
	I0520 12:42:31.627540  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 67/120
	I0520 12:42:32.629392  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 68/120
	I0520 12:42:33.630615  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 69/120
	I0520 12:42:34.632051  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 70/120
	I0520 12:42:35.633452  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 71/120
	I0520 12:42:36.634873  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 72/120
	I0520 12:42:37.636427  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 73/120
	I0520 12:42:38.637781  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 74/120
	I0520 12:42:39.639666  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 75/120
	I0520 12:42:40.641043  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 76/120
	I0520 12:42:41.642666  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 77/120
	I0520 12:42:42.644231  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 78/120
	I0520 12:42:43.645768  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 79/120
	I0520 12:42:44.647515  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 80/120
	I0520 12:42:45.649313  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 81/120
	I0520 12:42:46.650938  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 82/120
	I0520 12:42:47.652391  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 83/120
	I0520 12:42:48.654483  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 84/120
	I0520 12:42:49.655815  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 85/120
	I0520 12:42:50.657349  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 86/120
	I0520 12:42:51.658790  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 87/120
	I0520 12:42:52.661080  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 88/120
	I0520 12:42:53.662314  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 89/120
	I0520 12:42:54.665021  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 90/120
	I0520 12:42:55.666293  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 91/120
	I0520 12:42:56.667770  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 92/120
	I0520 12:42:57.669035  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 93/120
	I0520 12:42:58.670509  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 94/120
	I0520 12:42:59.672398  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 95/120
	I0520 12:43:00.673591  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 96/120
	I0520 12:43:01.675029  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 97/120
	I0520 12:43:02.677077  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 98/120
	I0520 12:43:03.678777  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 99/120
	I0520 12:43:04.680847  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 100/120
	I0520 12:43:05.682438  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 101/120
	I0520 12:43:06.683736  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 102/120
	I0520 12:43:07.684931  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 103/120
	I0520 12:43:08.686820  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 104/120
	I0520 12:43:09.688774  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 105/120
	I0520 12:43:10.690173  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 106/120
	I0520 12:43:11.691581  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 107/120
	I0520 12:43:12.693103  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 108/120
	I0520 12:43:13.694414  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 109/120
	I0520 12:43:14.696485  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 110/120
	I0520 12:43:15.698115  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 111/120
	I0520 12:43:16.699778  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 112/120
	I0520 12:43:17.701255  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 113/120
	I0520 12:43:18.702727  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 114/120
	I0520 12:43:19.704334  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 115/120
	I0520 12:43:20.705788  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 116/120
	I0520 12:43:21.707269  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 117/120
	I0520 12:43:22.709374  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 118/120
	I0520 12:43:23.710580  878922 main.go:141] libmachine: (ha-252263-m02) Waiting for machine to stop 119/120
	I0520 12:43:24.711286  878922 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 12:43:24.711443  878922 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-252263 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (19.028467191s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:43:24.757192  879356 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:43:24.757483  879356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:24.757495  879356 out.go:304] Setting ErrFile to fd 2...
	I0520 12:43:24.757499  879356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:24.757738  879356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:43:24.757954  879356 out.go:298] Setting JSON to false
	I0520 12:43:24.757989  879356 mustload.go:65] Loading cluster: ha-252263
	I0520 12:43:24.758044  879356 notify.go:220] Checking for updates...
	I0520 12:43:24.758401  879356 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:43:24.758420  879356 status.go:255] checking status of ha-252263 ...
	I0520 12:43:24.758821  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:24.758909  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:24.779707  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0520 12:43:24.780114  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:24.780849  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:24.780888  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:24.781367  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:24.781612  879356 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:43:24.783179  879356 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:43:24.783199  879356 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:24.783480  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:24.783516  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:24.797861  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0520 12:43:24.798261  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:24.798695  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:24.798718  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:24.799121  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:24.799320  879356 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:43:24.802257  879356 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:24.802713  879356 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:24.802741  879356 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:24.802827  879356 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:24.803122  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:24.803161  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:24.818441  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0520 12:43:24.818876  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:24.819388  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:24.819408  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:24.819671  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:24.819863  879356 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:43:24.820037  879356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:24.820065  879356 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:43:24.822670  879356 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:24.823066  879356 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:24.823087  879356 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:24.823221  879356 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:43:24.823413  879356 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:43:24.823557  879356 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:43:24.823715  879356 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:43:24.914634  879356 ssh_runner.go:195] Run: systemctl --version
	I0520 12:43:24.924650  879356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:24.942356  879356 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:24.942406  879356 api_server.go:166] Checking apiserver status ...
	I0520 12:43:24.942455  879356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:24.960016  879356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:43:24.969655  879356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:24.969719  879356 ssh_runner.go:195] Run: ls
	I0520 12:43:24.974305  879356 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:24.978651  879356 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:24.978672  879356 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:43:24.978683  879356 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:24.978701  879356 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:43:24.979031  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:24.979073  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:24.994155  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0520 12:43:24.994578  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:24.995158  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:24.995178  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:24.995568  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:24.995981  879356 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:43:24.997529  879356 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:43:24.997545  879356 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:24.997826  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:24.997862  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:25.012383  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0520 12:43:25.012770  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:25.013250  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:25.013275  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:25.013570  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:25.013765  879356 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:43:25.016564  879356 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:25.017020  879356 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:25.017054  879356 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:25.017151  879356 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:25.017438  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:25.017478  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:25.031691  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I0520 12:43:25.032043  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:25.032467  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:25.032486  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:25.032781  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:25.032956  879356 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:43:25.033150  879356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:25.033172  879356 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:43:25.035784  879356 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:25.036140  879356 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:25.036177  879356 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:25.036318  879356 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:43:25.036504  879356 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:43:25.036683  879356 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:43:25.036796  879356 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:43:43.383041  879356 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:43:43.383183  879356 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:43:43.383201  879356 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:43.383210  879356 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:43:43.383237  879356 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:43.383245  879356 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:43:43.383554  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:43.383601  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:43.398777  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0520 12:43:43.399303  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:43.399821  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:43.399844  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:43.400203  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:43.400396  879356 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:43:43.402034  879356 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:43:43.402056  879356 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:43:43.402480  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:43.402555  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:43.418310  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0520 12:43:43.418802  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:43.419346  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:43.419369  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:43.419702  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:43.419876  879356 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:43:43.422667  879356 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:43.423129  879356 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:43:43.423155  879356 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:43.423281  879356 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:43:43.423658  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:43.423719  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:43.438371  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0520 12:43:43.438939  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:43.439470  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:43.439496  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:43.439859  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:43.440062  879356 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:43:43.440312  879356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:43.440346  879356 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:43:43.443146  879356 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:43.443684  879356 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:43:43.443709  879356 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:43.443879  879356 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:43:43.444077  879356 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:43:43.444270  879356 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:43:43.444439  879356 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:43:43.524140  879356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:43.541654  879356 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:43.541687  879356 api_server.go:166] Checking apiserver status ...
	I0520 12:43:43.541721  879356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:43.557201  879356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:43:43.567151  879356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:43.567209  879356 ssh_runner.go:195] Run: ls
	I0520 12:43:43.571888  879356 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:43.576529  879356 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:43.576552  879356 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:43:43.576560  879356 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:43.576576  879356 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:43:43.576865  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:43.576911  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:43.593461  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I0520 12:43:43.593899  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:43.594351  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:43.594384  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:43.594695  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:43.594896  879356 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:43:43.596577  879356 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:43:43.596599  879356 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:43:43.596977  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:43.597017  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:43.612407  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0520 12:43:43.613135  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:43.613791  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:43.613821  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:43.614172  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:43.614377  879356 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:43:43.617381  879356 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:43.617836  879356 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:43:43.617862  879356 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:43.617988  879356 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:43:43.618344  879356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:43.618399  879356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:43.634463  879356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0520 12:43:43.634862  879356 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:43.635298  879356 main.go:141] libmachine: Using API Version  1
	I0520 12:43:43.635318  879356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:43.635629  879356 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:43.635856  879356 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:43:43.636056  879356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:43.636083  879356 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:43:43.638571  879356 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:43.639083  879356 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:43:43.639105  879356 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:43.639285  879356 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:43:43.639459  879356 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:43:43.639603  879356 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:43:43.639883  879356 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:43:43.723227  879356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:43.739325  879356 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr" : exit status 3
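To reproduce this check outside the test harness, the same status invocation recorded above can be run directly against the profile (a reproduction aid only; the profile name ha-252263 is specific to this CI run and will differ locally):

	out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr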
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-252263 -n ha-252263
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-252263 logs -n 25: (1.518928193s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263:/home/docker/cp-test_ha-252263-m03_ha-252263.txt                      |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263 sudo cat                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263.txt                                |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m04 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp testdata/cp-test.txt                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263:/home/docker/cp-test_ha-252263-m04_ha-252263.txt                      |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263 sudo cat                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263.txt                                |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03:/home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m03 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-252263 node stop m02 -v=7                                                    | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:36:55
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:36:55.522714  874942 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:36:55.522874  874942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:55.522887  874942 out.go:304] Setting ErrFile to fd 2...
	I0520 12:36:55.522894  874942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:55.523072  874942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:36:55.523607  874942 out.go:298] Setting JSON to false
	I0520 12:36:55.524517  874942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8363,"bootTime":1716200252,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:36:55.524575  874942 start.go:139] virtualization: kvm guest
	I0520 12:36:55.527010  874942 out.go:177] * [ha-252263] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:36:55.528911  874942 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 12:36:55.528891  874942 notify.go:220] Checking for updates...
	I0520 12:36:55.530376  874942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:36:55.532190  874942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:36:55.533798  874942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:55.535218  874942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:36:55.536593  874942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:36:55.537952  874942 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:36:55.572727  874942 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:36:55.574239  874942 start.go:297] selected driver: kvm2
	I0520 12:36:55.574259  874942 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:36:55.574285  874942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:36:55.574963  874942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:36:55.575027  874942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:36:55.590038  874942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:36:55.590091  874942 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:36:55.590281  874942 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:36:55.590307  874942 cni.go:84] Creating CNI manager for ""
	I0520 12:36:55.590313  874942 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 12:36:55.590318  874942 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 12:36:55.590361  874942 start.go:340] cluster config:
	{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:36:55.590466  874942 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:36:55.592333  874942 out.go:177] * Starting "ha-252263" primary control-plane node in "ha-252263" cluster
	I0520 12:36:55.593688  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:36:55.593726  874942 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:36:55.593737  874942 cache.go:56] Caching tarball of preloaded images
	I0520 12:36:55.593836  874942 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:36:55.593852  874942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:36:55.594156  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:36:55.594179  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json: {Name:mka44a3102880bc08a5134e6709927ed82a08e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:36:55.594300  874942 start.go:360] acquireMachinesLock for ha-252263: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:36:55.594327  874942 start.go:364] duration metric: took 14.32µs to acquireMachinesLock for "ha-252263"
	I0520 12:36:55.594340  874942 start.go:93] Provisioning new machine with config: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:36:55.594393  874942 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:36:55.596074  874942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:36:55.596211  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:36:55.596256  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:36:55.610363  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0520 12:36:55.610775  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:36:55.611351  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:36:55.611372  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:36:55.611698  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:36:55.611917  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:36:55.612091  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:36:55.612238  874942 start.go:159] libmachine.API.Create for "ha-252263" (driver="kvm2")
	I0520 12:36:55.612270  874942 client.go:168] LocalClient.Create starting
	I0520 12:36:55.612299  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 12:36:55.612334  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:36:55.612347  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:36:55.612399  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 12:36:55.612416  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:36:55.612428  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:36:55.612443  874942 main.go:141] libmachine: Running pre-create checks...
	I0520 12:36:55.612453  874942 main.go:141] libmachine: (ha-252263) Calling .PreCreateCheck
	I0520 12:36:55.612849  874942 main.go:141] libmachine: (ha-252263) Calling .GetConfigRaw
	I0520 12:36:55.613200  874942 main.go:141] libmachine: Creating machine...
	I0520 12:36:55.613212  874942 main.go:141] libmachine: (ha-252263) Calling .Create
	I0520 12:36:55.613356  874942 main.go:141] libmachine: (ha-252263) Creating KVM machine...
	I0520 12:36:55.614585  874942 main.go:141] libmachine: (ha-252263) DBG | found existing default KVM network
	I0520 12:36:55.615317  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:55.615186  874965 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0520 12:36:55.615333  874942 main.go:141] libmachine: (ha-252263) DBG | created network xml: 
	I0520 12:36:55.615342  874942 main.go:141] libmachine: (ha-252263) DBG | <network>
	I0520 12:36:55.615347  874942 main.go:141] libmachine: (ha-252263) DBG |   <name>mk-ha-252263</name>
	I0520 12:36:55.615353  874942 main.go:141] libmachine: (ha-252263) DBG |   <dns enable='no'/>
	I0520 12:36:55.615357  874942 main.go:141] libmachine: (ha-252263) DBG |   
	I0520 12:36:55.615363  874942 main.go:141] libmachine: (ha-252263) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 12:36:55.615379  874942 main.go:141] libmachine: (ha-252263) DBG |     <dhcp>
	I0520 12:36:55.615388  874942 main.go:141] libmachine: (ha-252263) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 12:36:55.615393  874942 main.go:141] libmachine: (ha-252263) DBG |     </dhcp>
	I0520 12:36:55.615417  874942 main.go:141] libmachine: (ha-252263) DBG |   </ip>
	I0520 12:36:55.615434  874942 main.go:141] libmachine: (ha-252263) DBG |   
	I0520 12:36:55.615445  874942 main.go:141] libmachine: (ha-252263) DBG | </network>
	I0520 12:36:55.615454  874942 main.go:141] libmachine: (ha-252263) DBG | 
	I0520 12:36:55.620329  874942 main.go:141] libmachine: (ha-252263) DBG | trying to create private KVM network mk-ha-252263 192.168.39.0/24...
	I0520 12:36:55.682543  874942 main.go:141] libmachine: (ha-252263) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263 ...
	I0520 12:36:55.682589  874942 main.go:141] libmachine: (ha-252263) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:36:55.682601  874942 main.go:141] libmachine: (ha-252263) DBG | private KVM network mk-ha-252263 192.168.39.0/24 created
	I0520 12:36:55.682619  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:55.682449  874965 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:55.682643  874942 main.go:141] libmachine: (ha-252263) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:36:55.943494  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:55.943374  874965 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa...
	I0520 12:36:56.155305  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:56.155140  874965 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/ha-252263.rawdisk...
	I0520 12:36:56.155334  874942 main.go:141] libmachine: (ha-252263) DBG | Writing magic tar header
	I0520 12:36:56.155360  874942 main.go:141] libmachine: (ha-252263) DBG | Writing SSH key tar header
	I0520 12:36:56.155372  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:56.155274  874965 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263 ...
	I0520 12:36:56.155395  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263
	I0520 12:36:56.155431  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263 (perms=drwx------)
	I0520 12:36:56.155447  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 12:36:56.155455  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:36:56.155465  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 12:36:56.155472  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 12:36:56.155479  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:36:56.155485  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:36:56.155492  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:56.155498  874942 main.go:141] libmachine: (ha-252263) Creating domain...
	I0520 12:36:56.155511  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 12:36:56.155516  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:36:56.155534  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:36:56.155553  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home
	I0520 12:36:56.155565  874942 main.go:141] libmachine: (ha-252263) DBG | Skipping /home - not owner
	I0520 12:36:56.156717  874942 main.go:141] libmachine: (ha-252263) define libvirt domain using xml: 
	I0520 12:36:56.156728  874942 main.go:141] libmachine: (ha-252263) <domain type='kvm'>
	I0520 12:36:56.156740  874942 main.go:141] libmachine: (ha-252263)   <name>ha-252263</name>
	I0520 12:36:56.156745  874942 main.go:141] libmachine: (ha-252263)   <memory unit='MiB'>2200</memory>
	I0520 12:36:56.156751  874942 main.go:141] libmachine: (ha-252263)   <vcpu>2</vcpu>
	I0520 12:36:56.156755  874942 main.go:141] libmachine: (ha-252263)   <features>
	I0520 12:36:56.156760  874942 main.go:141] libmachine: (ha-252263)     <acpi/>
	I0520 12:36:56.156765  874942 main.go:141] libmachine: (ha-252263)     <apic/>
	I0520 12:36:56.156775  874942 main.go:141] libmachine: (ha-252263)     <pae/>
	I0520 12:36:56.156796  874942 main.go:141] libmachine: (ha-252263)     
	I0520 12:36:56.156808  874942 main.go:141] libmachine: (ha-252263)   </features>
	I0520 12:36:56.156825  874942 main.go:141] libmachine: (ha-252263)   <cpu mode='host-passthrough'>
	I0520 12:36:56.156835  874942 main.go:141] libmachine: (ha-252263)   
	I0520 12:36:56.156839  874942 main.go:141] libmachine: (ha-252263)   </cpu>
	I0520 12:36:56.156844  874942 main.go:141] libmachine: (ha-252263)   <os>
	I0520 12:36:56.156851  874942 main.go:141] libmachine: (ha-252263)     <type>hvm</type>
	I0520 12:36:56.156857  874942 main.go:141] libmachine: (ha-252263)     <boot dev='cdrom'/>
	I0520 12:36:56.156863  874942 main.go:141] libmachine: (ha-252263)     <boot dev='hd'/>
	I0520 12:36:56.156869  874942 main.go:141] libmachine: (ha-252263)     <bootmenu enable='no'/>
	I0520 12:36:56.156875  874942 main.go:141] libmachine: (ha-252263)   </os>
	I0520 12:36:56.156880  874942 main.go:141] libmachine: (ha-252263)   <devices>
	I0520 12:36:56.156887  874942 main.go:141] libmachine: (ha-252263)     <disk type='file' device='cdrom'>
	I0520 12:36:56.156894  874942 main.go:141] libmachine: (ha-252263)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/boot2docker.iso'/>
	I0520 12:36:56.156904  874942 main.go:141] libmachine: (ha-252263)       <target dev='hdc' bus='scsi'/>
	I0520 12:36:56.156934  874942 main.go:141] libmachine: (ha-252263)       <readonly/>
	I0520 12:36:56.156960  874942 main.go:141] libmachine: (ha-252263)     </disk>
	I0520 12:36:56.156995  874942 main.go:141] libmachine: (ha-252263)     <disk type='file' device='disk'>
	I0520 12:36:56.157020  874942 main.go:141] libmachine: (ha-252263)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:36:56.157048  874942 main.go:141] libmachine: (ha-252263)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/ha-252263.rawdisk'/>
	I0520 12:36:56.157059  874942 main.go:141] libmachine: (ha-252263)       <target dev='hda' bus='virtio'/>
	I0520 12:36:56.157070  874942 main.go:141] libmachine: (ha-252263)     </disk>
	I0520 12:36:56.157081  874942 main.go:141] libmachine: (ha-252263)     <interface type='network'>
	I0520 12:36:56.157096  874942 main.go:141] libmachine: (ha-252263)       <source network='mk-ha-252263'/>
	I0520 12:36:56.157113  874942 main.go:141] libmachine: (ha-252263)       <model type='virtio'/>
	I0520 12:36:56.157121  874942 main.go:141] libmachine: (ha-252263)     </interface>
	I0520 12:36:56.157125  874942 main.go:141] libmachine: (ha-252263)     <interface type='network'>
	I0520 12:36:56.157133  874942 main.go:141] libmachine: (ha-252263)       <source network='default'/>
	I0520 12:36:56.157137  874942 main.go:141] libmachine: (ha-252263)       <model type='virtio'/>
	I0520 12:36:56.157144  874942 main.go:141] libmachine: (ha-252263)     </interface>
	I0520 12:36:56.157148  874942 main.go:141] libmachine: (ha-252263)     <serial type='pty'>
	I0520 12:36:56.157156  874942 main.go:141] libmachine: (ha-252263)       <target port='0'/>
	I0520 12:36:56.157160  874942 main.go:141] libmachine: (ha-252263)     </serial>
	I0520 12:36:56.157168  874942 main.go:141] libmachine: (ha-252263)     <console type='pty'>
	I0520 12:36:56.157172  874942 main.go:141] libmachine: (ha-252263)       <target type='serial' port='0'/>
	I0520 12:36:56.157181  874942 main.go:141] libmachine: (ha-252263)     </console>
	I0520 12:36:56.157194  874942 main.go:141] libmachine: (ha-252263)     <rng model='virtio'>
	I0520 12:36:56.157212  874942 main.go:141] libmachine: (ha-252263)       <backend model='random'>/dev/random</backend>
	I0520 12:36:56.157224  874942 main.go:141] libmachine: (ha-252263)     </rng>
	I0520 12:36:56.157231  874942 main.go:141] libmachine: (ha-252263)     
	I0520 12:36:56.157241  874942 main.go:141] libmachine: (ha-252263)     
	I0520 12:36:56.157250  874942 main.go:141] libmachine: (ha-252263)   </devices>
	I0520 12:36:56.157260  874942 main.go:141] libmachine: (ha-252263) </domain>
	I0520 12:36:56.157272  874942 main.go:141] libmachine: (ha-252263) 
	I0520 12:36:56.161707  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:61:1a:1b in network default
	I0520 12:36:56.162323  874942 main.go:141] libmachine: (ha-252263) Ensuring networks are active...
	I0520 12:36:56.162338  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:56.163052  874942 main.go:141] libmachine: (ha-252263) Ensuring network default is active
	I0520 12:36:56.163402  874942 main.go:141] libmachine: (ha-252263) Ensuring network mk-ha-252263 is active
	I0520 12:36:56.163905  874942 main.go:141] libmachine: (ha-252263) Getting domain xml...
	I0520 12:36:56.164647  874942 main.go:141] libmachine: (ha-252263) Creating domain...
	I0520 12:36:57.336606  874942 main.go:141] libmachine: (ha-252263) Waiting to get IP...
	I0520 12:36:57.337492  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:57.337901  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:57.337948  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:57.337892  874965 retry.go:31] will retry after 268.398176ms: waiting for machine to come up
	I0520 12:36:57.608480  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:57.609017  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:57.609047  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:57.608953  874965 retry.go:31] will retry after 265.174618ms: waiting for machine to come up
	I0520 12:36:57.875542  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:57.876034  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:57.876070  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:57.875979  874965 retry.go:31] will retry after 479.627543ms: waiting for machine to come up
	I0520 12:36:58.357692  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:58.358108  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:58.358134  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:58.358071  874965 retry.go:31] will retry after 541.356153ms: waiting for machine to come up
	I0520 12:36:58.900870  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:58.901308  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:58.901338  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:58.901253  874965 retry.go:31] will retry after 533.411181ms: waiting for machine to come up
	I0520 12:36:59.436114  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:59.436492  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:59.436517  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:59.436445  874965 retry.go:31] will retry after 937.293304ms: waiting for machine to come up
	I0520 12:37:00.375519  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:00.375916  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:00.375948  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:00.375881  874965 retry.go:31] will retry after 1.113015434s: waiting for machine to come up
	I0520 12:37:01.490751  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:01.491160  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:01.491188  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:01.491106  874965 retry.go:31] will retry after 1.487308712s: waiting for machine to come up
	I0520 12:37:02.979983  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:02.980469  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:02.980503  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:02.980415  874965 retry.go:31] will retry after 1.285882127s: waiting for machine to come up
	I0520 12:37:04.267910  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:04.268417  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:04.268451  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:04.268344  874965 retry.go:31] will retry after 1.917962446s: waiting for machine to come up
	I0520 12:37:06.188323  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:06.188815  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:06.188859  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:06.188788  874965 retry.go:31] will retry after 1.809201113s: waiting for machine to come up
	I0520 12:37:07.999321  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:07.999724  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:07.999766  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:07.999684  874965 retry.go:31] will retry after 3.16325035s: waiting for machine to come up
	I0520 12:37:11.164245  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:11.164616  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:11.164638  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:11.164583  874965 retry.go:31] will retry after 3.344329876s: waiting for machine to come up
	I0520 12:37:14.512959  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:14.513408  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:14.513433  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:14.513355  874965 retry.go:31] will retry after 5.078434537s: waiting for machine to come up
	I0520 12:37:19.596279  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.596681  874942 main.go:141] libmachine: (ha-252263) Found IP for machine: 192.168.39.182
	I0520 12:37:19.596698  874942 main.go:141] libmachine: (ha-252263) Reserving static IP address...
	I0520 12:37:19.596707  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has current primary IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.597114  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find host DHCP lease matching {name: "ha-252263", mac: "52:54:00:44:6e:b0", ip: "192.168.39.182"} in network mk-ha-252263
	I0520 12:37:19.667372  874942 main.go:141] libmachine: (ha-252263) Reserved static IP address: 192.168.39.182
	I0520 12:37:19.667400  874942 main.go:141] libmachine: (ha-252263) Waiting for SSH to be available...
	I0520 12:37:19.667411  874942 main.go:141] libmachine: (ha-252263) DBG | Getting to WaitForSSH function...
	I0520 12:37:19.669900  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.670286  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:19.670311  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.670463  874942 main.go:141] libmachine: (ha-252263) DBG | Using SSH client type: external
	I0520 12:37:19.670481  874942 main.go:141] libmachine: (ha-252263) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa (-rw-------)
	I0520 12:37:19.670525  874942 main.go:141] libmachine: (ha-252263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:37:19.670535  874942 main.go:141] libmachine: (ha-252263) DBG | About to run SSH command:
	I0520 12:37:19.670543  874942 main.go:141] libmachine: (ha-252263) DBG | exit 0
	I0520 12:37:19.794871  874942 main.go:141] libmachine: (ha-252263) DBG | SSH cmd err, output: <nil>: 
	I0520 12:37:19.795202  874942 main.go:141] libmachine: (ha-252263) KVM machine creation complete!
	I0520 12:37:19.795457  874942 main.go:141] libmachine: (ha-252263) Calling .GetConfigRaw
	I0520 12:37:19.796005  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:19.796227  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:19.796378  874942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:37:19.796396  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:19.797861  874942 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:37:19.797888  874942 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:37:19.797895  874942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:37:19.797900  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:19.799825  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.800157  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:19.800183  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.800322  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:19.800500  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.800659  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.800812  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:19.800974  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:19.801242  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:19.801257  874942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:37:19.910063  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:37:19.910088  874942 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:37:19.910095  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:19.912584  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.912962  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:19.912993  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.913131  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:19.913312  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.913491  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.913630  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:19.913787  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:19.913960  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:19.913972  874942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:37:20.019419  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:37:20.019485  874942 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:37:20.019495  874942 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:37:20.019504  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:37:20.019781  874942 buildroot.go:166] provisioning hostname "ha-252263"
	I0520 12:37:20.019806  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:37:20.019997  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.022669  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.023018  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.023039  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.023229  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.023399  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.023533  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.023638  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.023804  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:20.024021  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:20.024038  874942 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263 && echo "ha-252263" | sudo tee /etc/hostname
	I0520 12:37:20.144540  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263
	
	I0520 12:37:20.144573  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.147155  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.147543  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.147582  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.147775  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.147976  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.148139  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.148239  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.148364  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:20.148562  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:20.148579  874942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:37:20.263591  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:37:20.263620  874942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:37:20.263663  874942 buildroot.go:174] setting up certificates
	I0520 12:37:20.263675  874942 provision.go:84] configureAuth start
	I0520 12:37:20.263688  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:37:20.264004  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:20.266512  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.266893  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.266924  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.267035  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.269193  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.269516  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.269542  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.269647  874942 provision.go:143] copyHostCerts
	I0520 12:37:20.269679  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:37:20.269709  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:37:20.269719  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:37:20.269782  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:37:20.269887  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:37:20.269908  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:37:20.269916  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:37:20.269942  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:37:20.269996  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:37:20.270013  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:37:20.270020  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:37:20.270040  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:37:20.270105  874942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263 san=[127.0.0.1 192.168.39.182 ha-252263 localhost minikube]
	I0520 12:37:20.653179  874942 provision.go:177] copyRemoteCerts
	I0520 12:37:20.653240  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:37:20.653271  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.655925  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.656232  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.656265  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.656399  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.656583  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.656742  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.656915  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:20.741094  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:37:20.741182  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:37:20.765713  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:37:20.765806  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 12:37:20.789218  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:37:20.789295  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:37:20.812328  874942 provision.go:87] duration metric: took 548.635907ms to configureAuth
	I0520 12:37:20.812359  874942 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:37:20.812547  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:37:20.812628  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.815236  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.815567  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.815605  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.815802  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.816015  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.816188  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.816317  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.816496  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:20.816673  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:20.816689  874942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:37:21.075709  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:37:21.075746  874942 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:37:21.075759  874942 main.go:141] libmachine: (ha-252263) Calling .GetURL
	I0520 12:37:21.076990  874942 main.go:141] libmachine: (ha-252263) DBG | Using libvirt version 6000000
	I0520 12:37:21.079432  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.079759  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.079781  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.079969  874942 main.go:141] libmachine: Docker is up and running!
	I0520 12:37:21.079985  874942 main.go:141] libmachine: Reticulating splines...
	I0520 12:37:21.079994  874942 client.go:171] duration metric: took 25.467715983s to LocalClient.Create
	I0520 12:37:21.080021  874942 start.go:167] duration metric: took 25.467784578s to libmachine.API.Create "ha-252263"
	I0520 12:37:21.080032  874942 start.go:293] postStartSetup for "ha-252263" (driver="kvm2")
	I0520 12:37:21.080046  874942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:37:21.080070  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.080296  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:37:21.080320  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.082882  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.083291  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.083323  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.083402  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.083580  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.083765  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.083895  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:21.164840  874942 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:37:21.169194  874942 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:37:21.169222  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:37:21.169303  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:37:21.169404  874942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:37:21.169417  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:37:21.169516  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:37:21.178738  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:37:21.201779  874942 start.go:296] duration metric: took 121.733252ms for postStartSetup
	I0520 12:37:21.201849  874942 main.go:141] libmachine: (ha-252263) Calling .GetConfigRaw
	I0520 12:37:21.202463  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:21.205067  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.205409  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.205429  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.205735  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:37:21.205924  874942 start.go:128] duration metric: took 25.611519662s to createHost
	I0520 12:37:21.205950  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.208406  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.208797  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.208821  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.208984  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.209123  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.209251  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.209410  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.209551  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:21.209699  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:21.209712  874942 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:37:21.315330  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208641.294732996
	
	I0520 12:37:21.315362  874942 fix.go:216] guest clock: 1716208641.294732996
	I0520 12:37:21.315369  874942 fix.go:229] Guest: 2024-05-20 12:37:21.294732996 +0000 UTC Remote: 2024-05-20 12:37:21.205935394 +0000 UTC m=+25.717718406 (delta=88.797602ms)
	I0520 12:37:21.315421  874942 fix.go:200] guest clock delta is within tolerance: 88.797602ms
	I0520 12:37:21.315430  874942 start.go:83] releasing machines lock for "ha-252263", held for 25.721096085s
	I0520 12:37:21.315459  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.315708  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:21.318184  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.318471  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.318495  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.318625  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.319172  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.319378  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.319453  874942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:37:21.319512  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.319566  874942 ssh_runner.go:195] Run: cat /version.json
	I0520 12:37:21.319587  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.322135  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322360  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322452  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.322469  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322641  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.322779  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.322804  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322813  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.322953  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.323025  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.323262  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:21.323304  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.323432  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.323551  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	W0520 12:37:21.399625  874942 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:37:21.399753  874942 ssh_runner.go:195] Run: systemctl --version
	I0520 12:37:21.422497  874942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:37:21.580652  874942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:37:21.587373  874942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:37:21.587432  874942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:37:21.603467  874942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:37:21.603492  874942 start.go:494] detecting cgroup driver to use...
	I0520 12:37:21.603586  874942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:37:21.620241  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:37:21.633588  874942 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:37:21.633633  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:37:21.646258  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:37:21.658897  874942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:37:21.773627  874942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:37:21.918419  874942 docker.go:233] disabling docker service ...
	I0520 12:37:21.918505  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:37:21.932965  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:37:21.945987  874942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:37:22.094195  874942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:37:22.214741  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:37:22.228699  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:37:22.246794  874942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:37:22.246880  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.256699  874942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:37:22.256760  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.266482  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.276338  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.286282  874942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:37:22.297032  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.306862  874942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.323683  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.333323  874942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:37:22.342159  874942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:37:22.342213  874942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:37:22.354542  874942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:37:22.364062  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:37:22.475186  874942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:37:22.607916  874942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:37:22.607997  874942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:37:22.612724  874942 start.go:562] Will wait 60s for crictl version
	I0520 12:37:22.612888  874942 ssh_runner.go:195] Run: which crictl
	I0520 12:37:22.616685  874942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:37:22.655818  874942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:37:22.655892  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:37:22.682470  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:37:22.711824  874942 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:37:22.712837  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:22.715583  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:22.715925  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:22.715954  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:22.716133  874942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:37:22.720175  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:37:22.733326  874942 kubeadm.go:877] updating cluster {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:37:22.733449  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:37:22.733508  874942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:37:22.765473  874942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 12:37:22.765541  874942 ssh_runner.go:195] Run: which lz4
	I0520 12:37:22.769431  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 12:37:22.769515  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 12:37:22.773682  874942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 12:37:22.773713  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 12:37:24.141913  874942 crio.go:462] duration metric: took 1.372417849s to copy over tarball
	I0520 12:37:24.141993  874942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 12:37:26.194872  874942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.052823547s)
	I0520 12:37:26.194904  874942 crio.go:469] duration metric: took 2.052964592s to extract the tarball
	I0520 12:37:26.194914  874942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 12:37:26.238270  874942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:37:26.294122  874942 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:37:26.294148  874942 cache_images.go:84] Images are preloaded, skipping loading
	I0520 12:37:26.294157  874942 kubeadm.go:928] updating node { 192.168.39.182 8443 v1.30.1 crio true true} ...
	I0520 12:37:26.294285  874942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:37:26.294378  874942 ssh_runner.go:195] Run: crio config
	I0520 12:37:26.338350  874942 cni.go:84] Creating CNI manager for ""
	I0520 12:37:26.338372  874942 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 12:37:26.338389  874942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:37:26.338416  874942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-252263 NodeName:ha-252263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:37:26.338561  874942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-252263"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 12:37:26.338584  874942 kube-vip.go:115] generating kube-vip config ...
	I0520 12:37:26.338627  874942 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:37:26.355710  874942 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:37:26.355831  874942 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0520 12:37:26.355884  874942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:37:26.365744  874942 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:37:26.365796  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 12:37:26.374782  874942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 12:37:26.390579  874942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:37:26.406515  874942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 12:37:26.422366  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 12:37:26.438142  874942 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:37:26.441986  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:37:26.453891  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:37:26.576120  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:37:26.592196  874942 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.182
	I0520 12:37:26.592225  874942 certs.go:194] generating shared ca certs ...
	I0520 12:37:26.592248  874942 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:26.592433  874942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:37:26.592492  874942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:37:26.592506  874942 certs.go:256] generating profile certs ...
	I0520 12:37:26.592570  874942 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:37:26.592591  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt with IP's: []
	I0520 12:37:26.812850  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt ...
	I0520 12:37:26.812881  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt: {Name:mk923141f1efb3fc32fe7a6617fae7374249c3d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:26.813071  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key ...
	I0520 12:37:26.813086  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key: {Name:mkb137e09f84f93aec1540f80bb1a50c72c56e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:26.813193  874942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629
	I0520 12:37:26.813209  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.254]
	I0520 12:37:27.078051  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629 ...
	I0520 12:37:27.078085  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629: {Name:mkf853b6980b0a5db71ada545009422aa97c9cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.078262  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629 ...
	I0520 12:37:27.078280  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629: {Name:mk8e8df3bf7473c3e59d67197fa4da96247d6a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.078372  874942 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:37:27.078448  874942 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
	I0520 12:37:27.078499  874942 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:37:27.078514  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt with IP's: []
	I0520 12:37:27.298184  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt ...
	I0520 12:37:27.298213  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt: {Name:mk2dfcf554fe922a6ee5776cd9fb5b4a108a69cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.298395  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key ...
	I0520 12:37:27.298409  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key: {Name:mkc3425ae95d7b09a44694b623a43120e707d763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.298502  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:37:27.298521  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:37:27.298533  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:37:27.298545  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:37:27.298557  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:37:27.298570  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:37:27.298581  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:37:27.298593  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:37:27.298641  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:37:27.298684  874942 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:37:27.298694  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:37:27.298718  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:37:27.298740  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:37:27.298761  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:37:27.298801  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:37:27.298829  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.298862  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.298881  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.299432  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:37:27.326594  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:37:27.352290  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:37:27.381601  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:37:27.406856  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 12:37:27.433992  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 12:37:27.456827  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:37:27.479597  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:37:27.502409  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:37:27.524924  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:37:27.547140  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:37:27.569852  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 12:37:27.586389  874942 ssh_runner.go:195] Run: openssl version
	I0520 12:37:27.594612  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:37:27.605833  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.610706  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.610759  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.617003  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:37:27.629753  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:37:27.640985  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.645841  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.645897  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.651895  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:37:27.662740  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:37:27.673662  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.678449  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.678492  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.684369  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:37:27.695699  874942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:37:27.700059  874942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:37:27.700108  874942 kubeadm.go:391] StartCluster: {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:37:27.700194  874942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:37:27.700245  874942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:37:27.740509  874942 cri.go:89] found id: ""
	I0520 12:37:27.740597  874942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 12:37:27.750943  874942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 12:37:27.760648  874942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 12:37:27.770535  874942 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 12:37:27.770574  874942 kubeadm.go:156] found existing configuration files:
	
	I0520 12:37:27.770618  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 12:37:27.779837  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 12:37:27.779914  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 12:37:27.789350  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 12:37:27.798196  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 12:37:27.798250  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 12:37:27.807379  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 12:37:27.816387  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 12:37:27.816433  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 12:37:27.825766  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 12:37:27.835240  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 12:37:27.835293  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 12:37:27.844678  874942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 12:37:27.965545  874942 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 12:37:27.965651  874942 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 12:37:28.082992  874942 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 12:37:28.083131  874942 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 12:37:28.083233  874942 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 12:37:28.296410  874942 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 12:37:28.558599  874942 out.go:204]   - Generating certificates and keys ...
	I0520 12:37:28.558745  874942 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 12:37:28.558822  874942 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 12:37:28.558982  874942 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 12:37:28.606816  874942 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 12:37:28.675861  874942 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 12:37:28.922702  874942 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 12:37:29.011333  874942 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 12:37:29.011483  874942 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-252263 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0520 12:37:29.206710  874942 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 12:37:29.207038  874942 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-252263 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0520 12:37:29.263571  874942 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 12:37:29.504741  874942 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 12:37:29.548497  874942 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 12:37:29.548782  874942 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 12:37:29.973346  874942 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 12:37:30.377729  874942 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 12:37:30.444622  874942 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 12:37:30.545797  874942 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 12:37:30.604806  874942 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 12:37:30.604912  874942 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 12:37:30.605011  874942 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 12:37:30.606359  874942 out.go:204]   - Booting up control plane ...
	I0520 12:37:30.606459  874942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 12:37:30.606545  874942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 12:37:30.606627  874942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 12:37:30.627315  874942 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 12:37:30.628993  874942 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 12:37:30.629064  874942 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 12:37:30.771523  874942 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 12:37:30.771651  874942 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 12:37:31.272144  874942 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.019925ms
	I0520 12:37:31.272254  874942 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 12:37:37.211539  874942 kubeadm.go:309] [api-check] The API server is healthy after 5.942108324s
	I0520 12:37:37.230286  874942 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 12:37:37.241865  874942 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 12:37:37.269611  874942 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 12:37:37.269795  874942 kubeadm.go:309] [mark-control-plane] Marking the node ha-252263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 12:37:37.282536  874942 kubeadm.go:309] [bootstrap-token] Using token: p522o0.g86oczkum8u4xbvc
	I0520 12:37:37.283911  874942 out.go:204]   - Configuring RBAC rules ...
	I0520 12:37:37.284015  874942 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 12:37:37.309231  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 12:37:37.316659  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 12:37:37.319425  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 12:37:37.322967  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 12:37:37.325994  874942 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 12:37:37.620791  874942 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 12:37:38.075905  874942 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 12:37:38.623500  874942 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 12:37:38.624371  874942 kubeadm.go:309] 
	I0520 12:37:38.624439  874942 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 12:37:38.624450  874942 kubeadm.go:309] 
	I0520 12:37:38.624522  874942 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 12:37:38.624531  874942 kubeadm.go:309] 
	I0520 12:37:38.624573  874942 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 12:37:38.624628  874942 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 12:37:38.624724  874942 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 12:37:38.624750  874942 kubeadm.go:309] 
	I0520 12:37:38.624809  874942 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 12:37:38.624834  874942 kubeadm.go:309] 
	I0520 12:37:38.624904  874942 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 12:37:38.624915  874942 kubeadm.go:309] 
	I0520 12:37:38.624987  874942 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 12:37:38.625087  874942 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 12:37:38.625177  874942 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 12:37:38.625190  874942 kubeadm.go:309] 
	I0520 12:37:38.625294  874942 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 12:37:38.625393  874942 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 12:37:38.625401  874942 kubeadm.go:309] 
	I0520 12:37:38.625504  874942 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p522o0.g86oczkum8u4xbvc \
	I0520 12:37:38.625640  874942 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 \
	I0520 12:37:38.625665  874942 kubeadm.go:309] 	--control-plane 
	I0520 12:37:38.625669  874942 kubeadm.go:309] 
	I0520 12:37:38.625743  874942 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 12:37:38.625751  874942 kubeadm.go:309] 
	I0520 12:37:38.625821  874942 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p522o0.g86oczkum8u4xbvc \
	I0520 12:37:38.625905  874942 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 
	I0520 12:37:38.626730  874942 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 12:37:38.626758  874942 cni.go:84] Creating CNI manager for ""
	I0520 12:37:38.626767  874942 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 12:37:38.628330  874942 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 12:37:38.629563  874942 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 12:37:38.635041  874942 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 12:37:38.635063  874942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 12:37:38.653215  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 12:37:38.996013  874942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 12:37:38.996123  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:38.996163  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-252263 minikube.k8s.io/updated_at=2024_05_20T12_37_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=ha-252263 minikube.k8s.io/primary=true
	I0520 12:37:39.197259  874942 ops.go:34] apiserver oom_adj: -16
	I0520 12:37:39.210739  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:39.711433  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:40.211152  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:40.711747  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:41.211750  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:41.711621  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:42.210990  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:42.711310  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:43.211778  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:43.711715  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:44.210984  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:44.711031  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:45.211225  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:45.711429  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:46.211002  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:46.711628  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:47.211522  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:47.710836  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:48.211782  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:48.711468  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:49.211155  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:49.711475  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:50.211745  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:50.710800  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:51.210889  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:51.301254  874942 kubeadm.go:1107] duration metric: took 12.305202177s to wait for elevateKubeSystemPrivileges
	W0520 12:37:51.301299  874942 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 12:37:51.301309  874942 kubeadm.go:393] duration metric: took 23.601205588s to StartCluster
	I0520 12:37:51.301333  874942 settings.go:142] acquiring lock: {Name:mk4281d9011919f2beed93cad1a6e2e67e70984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:51.301428  874942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:37:51.302351  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:51.302610  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 12:37:51.302630  874942 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:37:51.302659  874942 start.go:240] waiting for startup goroutines ...
	I0520 12:37:51.302673  874942 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 12:37:51.302736  874942 addons.go:69] Setting storage-provisioner=true in profile "ha-252263"
	I0520 12:37:51.302746  874942 addons.go:69] Setting default-storageclass=true in profile "ha-252263"
	I0520 12:37:51.302779  874942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-252263"
	I0520 12:37:51.302780  874942 addons.go:234] Setting addon storage-provisioner=true in "ha-252263"
	I0520 12:37:51.302911  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:37:51.302918  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:37:51.303193  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.303230  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.303282  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.303316  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.318749  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44953
	I0520 12:37:51.319064  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0520 12:37:51.319291  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.319463  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.319831  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.319852  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.319992  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.320021  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.320183  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.320372  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:51.320393  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.320890  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.320934  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.322540  874942 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:37:51.322896  874942 kapi.go:59] client config for ha-252263: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 12:37:51.323415  874942 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 12:37:51.323687  874942 addons.go:234] Setting addon default-storageclass=true in "ha-252263"
	I0520 12:37:51.323732  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:37:51.324104  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.324147  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.335881  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0520 12:37:51.336310  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.336828  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.336850  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.337248  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.337486  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:51.338973  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I0520 12:37:51.339236  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:51.339406  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.341013  874942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 12:37:51.339847  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.341041  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.342378  874942 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:37:51.342399  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 12:37:51.342418  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:51.342654  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.343273  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.343304  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.345687  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.346200  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:51.346222  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.346374  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:51.346549  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:51.346750  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:51.346933  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:51.358899  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I0520 12:37:51.359392  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.359932  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.359953  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.360245  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.360439  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:51.361971  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:51.362209  874942 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 12:37:51.362228  874942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 12:37:51.362248  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:51.364838  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.365281  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:51.365309  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.365425  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:51.365601  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:51.365729  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:51.365888  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:51.504230  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 12:37:51.504898  874942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:37:51.560126  874942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 12:37:52.337307  874942 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 12:37:52.337413  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337437  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337476  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337498  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337790  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.337806  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.337815  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337823  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337866  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.337886  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.337895  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337906  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337874  874942 main.go:141] libmachine: (ha-252263) DBG | Closing plugin on server side
	I0520 12:37:52.338081  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.338097  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.338187  874942 main.go:141] libmachine: (ha-252263) DBG | Closing plugin on server side
	I0520 12:37:52.338229  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.338254  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.338434  874942 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 12:37:52.338449  874942 round_trippers.go:469] Request Headers:
	I0520 12:37:52.338459  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:37:52.338470  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:37:52.351140  874942 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 12:37:52.351883  874942 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 12:37:52.351907  874942 round_trippers.go:469] Request Headers:
	I0520 12:37:52.351918  874942 round_trippers.go:473]     Content-Type: application/json
	I0520 12:37:52.351924  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:37:52.351927  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:37:52.355049  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:37:52.355311  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.355327  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.355614  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.355636  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.357649  874942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 12:37:52.358925  874942 addons.go:505] duration metric: took 1.056246899s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 12:37:52.358973  874942 start.go:245] waiting for cluster config update ...
	I0520 12:37:52.358992  874942 start.go:254] writing updated cluster config ...
	I0520 12:37:52.360660  874942 out.go:177] 
	I0520 12:37:52.362311  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:37:52.362401  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:37:52.364079  874942 out.go:177] * Starting "ha-252263-m02" control-plane node in "ha-252263" cluster
	I0520 12:37:52.365509  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:37:52.365540  874942 cache.go:56] Caching tarball of preloaded images
	I0520 12:37:52.365637  874942 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:37:52.365650  874942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:37:52.365746  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:37:52.365933  874942 start.go:360] acquireMachinesLock for ha-252263-m02: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:37:52.365996  874942 start.go:364] duration metric: took 42.379µs to acquireMachinesLock for "ha-252263-m02"
	I0520 12:37:52.366019  874942 start.go:93] Provisioning new machine with config: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:37:52.366080  874942 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0520 12:37:52.367595  874942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:37:52.367684  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:52.367715  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:52.382211  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0520 12:37:52.382574  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:52.383165  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:52.383187  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:52.383527  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:52.383710  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:37:52.383859  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:37:52.383997  874942 start.go:159] libmachine.API.Create for "ha-252263" (driver="kvm2")
	I0520 12:37:52.384018  874942 client.go:168] LocalClient.Create starting
	I0520 12:37:52.384052  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 12:37:52.384082  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:37:52.384102  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:37:52.384159  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 12:37:52.384177  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:37:52.384187  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:37:52.384203  874942 main.go:141] libmachine: Running pre-create checks...
	I0520 12:37:52.384211  874942 main.go:141] libmachine: (ha-252263-m02) Calling .PreCreateCheck
	I0520 12:37:52.384379  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetConfigRaw
	I0520 12:37:52.384792  874942 main.go:141] libmachine: Creating machine...
	I0520 12:37:52.384813  874942 main.go:141] libmachine: (ha-252263-m02) Calling .Create
	I0520 12:37:52.384981  874942 main.go:141] libmachine: (ha-252263-m02) Creating KVM machine...
	I0520 12:37:52.386304  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found existing default KVM network
	I0520 12:37:52.386441  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found existing private KVM network mk-ha-252263
	I0520 12:37:52.386551  874942 main.go:141] libmachine: (ha-252263-m02) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02 ...
	I0520 12:37:52.386572  874942 main.go:141] libmachine: (ha-252263-m02) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:37:52.386658  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.386550  875352 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:37:52.386737  874942 main.go:141] libmachine: (ha-252263-m02) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:37:52.644510  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.644386  875352 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa...
	I0520 12:37:52.885915  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.885793  875352 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/ha-252263-m02.rawdisk...
	I0520 12:37:52.885948  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Writing magic tar header
	I0520 12:37:52.885970  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Writing SSH key tar header
	I0520 12:37:52.885986  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.885927  875352 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02 ...
	I0520 12:37:52.886062  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02
	I0520 12:37:52.886099  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 12:37:52.886118  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02 (perms=drwx------)
	I0520 12:37:52.886137  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:37:52.886152  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 12:37:52.886172  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 12:37:52.886193  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:37:52.886208  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:37:52.886228  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 12:37:52.886242  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:37:52.886256  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:37:52.886269  874942 main.go:141] libmachine: (ha-252263-m02) Creating domain...
	I0520 12:37:52.886388  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:37:52.886409  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home
	I0520 12:37:52.886422  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Skipping /home - not owner
	I0520 12:37:52.887324  874942 main.go:141] libmachine: (ha-252263-m02) define libvirt domain using xml: 
	I0520 12:37:52.887347  874942 main.go:141] libmachine: (ha-252263-m02) <domain type='kvm'>
	I0520 12:37:52.887358  874942 main.go:141] libmachine: (ha-252263-m02)   <name>ha-252263-m02</name>
	I0520 12:37:52.887366  874942 main.go:141] libmachine: (ha-252263-m02)   <memory unit='MiB'>2200</memory>
	I0520 12:37:52.887378  874942 main.go:141] libmachine: (ha-252263-m02)   <vcpu>2</vcpu>
	I0520 12:37:52.887390  874942 main.go:141] libmachine: (ha-252263-m02)   <features>
	I0520 12:37:52.887399  874942 main.go:141] libmachine: (ha-252263-m02)     <acpi/>
	I0520 12:37:52.887406  874942 main.go:141] libmachine: (ha-252263-m02)     <apic/>
	I0520 12:37:52.887411  874942 main.go:141] libmachine: (ha-252263-m02)     <pae/>
	I0520 12:37:52.887417  874942 main.go:141] libmachine: (ha-252263-m02)     
	I0520 12:37:52.887423  874942 main.go:141] libmachine: (ha-252263-m02)   </features>
	I0520 12:37:52.887434  874942 main.go:141] libmachine: (ha-252263-m02)   <cpu mode='host-passthrough'>
	I0520 12:37:52.887455  874942 main.go:141] libmachine: (ha-252263-m02)   
	I0520 12:37:52.887469  874942 main.go:141] libmachine: (ha-252263-m02)   </cpu>
	I0520 12:37:52.887478  874942 main.go:141] libmachine: (ha-252263-m02)   <os>
	I0520 12:37:52.887493  874942 main.go:141] libmachine: (ha-252263-m02)     <type>hvm</type>
	I0520 12:37:52.887502  874942 main.go:141] libmachine: (ha-252263-m02)     <boot dev='cdrom'/>
	I0520 12:37:52.887507  874942 main.go:141] libmachine: (ha-252263-m02)     <boot dev='hd'/>
	I0520 12:37:52.887513  874942 main.go:141] libmachine: (ha-252263-m02)     <bootmenu enable='no'/>
	I0520 12:37:52.887519  874942 main.go:141] libmachine: (ha-252263-m02)   </os>
	I0520 12:37:52.887526  874942 main.go:141] libmachine: (ha-252263-m02)   <devices>
	I0520 12:37:52.887534  874942 main.go:141] libmachine: (ha-252263-m02)     <disk type='file' device='cdrom'>
	I0520 12:37:52.887546  874942 main.go:141] libmachine: (ha-252263-m02)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/boot2docker.iso'/>
	I0520 12:37:52.887553  874942 main.go:141] libmachine: (ha-252263-m02)       <target dev='hdc' bus='scsi'/>
	I0520 12:37:52.887559  874942 main.go:141] libmachine: (ha-252263-m02)       <readonly/>
	I0520 12:37:52.887566  874942 main.go:141] libmachine: (ha-252263-m02)     </disk>
	I0520 12:37:52.887571  874942 main.go:141] libmachine: (ha-252263-m02)     <disk type='file' device='disk'>
	I0520 12:37:52.887577  874942 main.go:141] libmachine: (ha-252263-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:37:52.887603  874942 main.go:141] libmachine: (ha-252263-m02)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/ha-252263-m02.rawdisk'/>
	I0520 12:37:52.887624  874942 main.go:141] libmachine: (ha-252263-m02)       <target dev='hda' bus='virtio'/>
	I0520 12:37:52.887631  874942 main.go:141] libmachine: (ha-252263-m02)     </disk>
	I0520 12:37:52.887638  874942 main.go:141] libmachine: (ha-252263-m02)     <interface type='network'>
	I0520 12:37:52.887645  874942 main.go:141] libmachine: (ha-252263-m02)       <source network='mk-ha-252263'/>
	I0520 12:37:52.887652  874942 main.go:141] libmachine: (ha-252263-m02)       <model type='virtio'/>
	I0520 12:37:52.887657  874942 main.go:141] libmachine: (ha-252263-m02)     </interface>
	I0520 12:37:52.887664  874942 main.go:141] libmachine: (ha-252263-m02)     <interface type='network'>
	I0520 12:37:52.887669  874942 main.go:141] libmachine: (ha-252263-m02)       <source network='default'/>
	I0520 12:37:52.887677  874942 main.go:141] libmachine: (ha-252263-m02)       <model type='virtio'/>
	I0520 12:37:52.887682  874942 main.go:141] libmachine: (ha-252263-m02)     </interface>
	I0520 12:37:52.887687  874942 main.go:141] libmachine: (ha-252263-m02)     <serial type='pty'>
	I0520 12:37:52.887699  874942 main.go:141] libmachine: (ha-252263-m02)       <target port='0'/>
	I0520 12:37:52.887712  874942 main.go:141] libmachine: (ha-252263-m02)     </serial>
	I0520 12:37:52.887722  874942 main.go:141] libmachine: (ha-252263-m02)     <console type='pty'>
	I0520 12:37:52.887733  874942 main.go:141] libmachine: (ha-252263-m02)       <target type='serial' port='0'/>
	I0520 12:37:52.887744  874942 main.go:141] libmachine: (ha-252263-m02)     </console>
	I0520 12:37:52.887754  874942 main.go:141] libmachine: (ha-252263-m02)     <rng model='virtio'>
	I0520 12:37:52.887771  874942 main.go:141] libmachine: (ha-252263-m02)       <backend model='random'>/dev/random</backend>
	I0520 12:37:52.887784  874942 main.go:141] libmachine: (ha-252263-m02)     </rng>
	I0520 12:37:52.887792  874942 main.go:141] libmachine: (ha-252263-m02)     
	I0520 12:37:52.887796  874942 main.go:141] libmachine: (ha-252263-m02)     
	I0520 12:37:52.887802  874942 main.go:141] libmachine: (ha-252263-m02)   </devices>
	I0520 12:37:52.887806  874942 main.go:141] libmachine: (ha-252263-m02) </domain>
	I0520 12:37:52.887816  874942 main.go:141] libmachine: (ha-252263-m02) 
	I0520 12:37:52.894397  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:86:3e:c8 in network default
	I0520 12:37:52.894920  874942 main.go:141] libmachine: (ha-252263-m02) Ensuring networks are active...
	I0520 12:37:52.894936  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:52.895632  874942 main.go:141] libmachine: (ha-252263-m02) Ensuring network default is active
	I0520 12:37:52.895903  874942 main.go:141] libmachine: (ha-252263-m02) Ensuring network mk-ha-252263 is active
	I0520 12:37:52.896228  874942 main.go:141] libmachine: (ha-252263-m02) Getting domain xml...
	I0520 12:37:52.896938  874942 main.go:141] libmachine: (ha-252263-m02) Creating domain...
	I0520 12:37:54.137521  874942 main.go:141] libmachine: (ha-252263-m02) Waiting to get IP...
	I0520 12:37:54.138340  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:54.138744  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:54.138800  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:54.138726  875352 retry.go:31] will retry after 192.479928ms: waiting for machine to come up
	I0520 12:37:54.333310  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:54.333806  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:54.333838  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:54.333745  875352 retry.go:31] will retry after 325.539642ms: waiting for machine to come up
	I0520 12:37:54.660916  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:54.661370  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:54.661395  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:54.661314  875352 retry.go:31] will retry after 338.837064ms: waiting for machine to come up
	I0520 12:37:55.001819  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:55.002266  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:55.002297  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:55.002214  875352 retry.go:31] will retry after 573.584149ms: waiting for machine to come up
	I0520 12:37:55.577088  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:55.577722  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:55.577755  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:55.577579  875352 retry.go:31] will retry after 487.137601ms: waiting for machine to come up
	I0520 12:37:56.066173  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:56.066713  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:56.066750  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:56.066643  875352 retry.go:31] will retry after 619.061485ms: waiting for machine to come up
	I0520 12:37:56.686886  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:56.687348  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:56.687377  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:56.687285  875352 retry.go:31] will retry after 1.172165578s: waiting for machine to come up
	I0520 12:37:57.861266  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:57.861789  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:57.861836  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:57.861748  875352 retry.go:31] will retry after 1.198369396s: waiting for machine to come up
	I0520 12:37:59.061207  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:59.061666  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:59.061695  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:59.061607  875352 retry.go:31] will retry after 1.159246595s: waiting for machine to come up
	I0520 12:38:00.222945  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:00.223295  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:00.223323  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:00.223248  875352 retry.go:31] will retry after 1.591878155s: waiting for machine to come up
	I0520 12:38:01.816669  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:01.817147  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:01.817186  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:01.817078  875352 retry.go:31] will retry after 2.342714609s: waiting for machine to come up
	I0520 12:38:04.160937  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:04.161348  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:04.161372  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:04.161308  875352 retry.go:31] will retry after 2.689545134s: waiting for machine to come up
	I0520 12:38:06.852983  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:06.853350  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:06.853381  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:06.853309  875352 retry.go:31] will retry after 3.47993687s: waiting for machine to come up
	I0520 12:38:10.334414  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:10.334773  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:10.334805  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:10.334757  875352 retry.go:31] will retry after 4.302575583s: waiting for machine to come up
	I0520 12:38:14.639801  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:14.640153  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:14.640188  874942 main.go:141] libmachine: (ha-252263-m02) Found IP for machine: 192.168.39.22
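The lines above show the driver polling libvirt for the domain's DHCP lease, sleeping a growing, jittered interval between attempts (192ms up to roughly 4.3s in this run) until an address appears. A minimal sketch of that wait loop, assuming a hypothetical lookupIP helper standing in for the lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying the libvirt network for the
	// domain's DHCP lease; it returns an error until a lease exists.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls lookupIP with a randomized, growing delay until an
	// IP is found or the deadline passes, mirroring the retries above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// jittered back-off, roughly matching the 192ms..4.3s steps in the log
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:f8:3d:6b", 2*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP", ip)
		}
	}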
	I0520 12:38:14.640245  874942 main.go:141] libmachine: (ha-252263-m02) Reserving static IP address...
	I0520 12:38:14.640554  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find host DHCP lease matching {name: "ha-252263-m02", mac: "52:54:00:f8:3d:6b", ip: "192.168.39.22"} in network mk-ha-252263
	I0520 12:38:14.712950  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Getting to WaitForSSH function...
	I0520 12:38:14.712995  874942 main.go:141] libmachine: (ha-252263-m02) Reserved static IP address: 192.168.39.22
	I0520 12:38:14.713044  874942 main.go:141] libmachine: (ha-252263-m02) Waiting for SSH to be available...
	I0520 12:38:14.715636  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:14.715942  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263
	I0520 12:38:14.715971  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find defined IP address of network mk-ha-252263 interface with MAC address 52:54:00:f8:3d:6b
	I0520 12:38:14.716148  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH client type: external
	I0520 12:38:14.716174  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa (-rw-------)
	I0520 12:38:14.716206  874942 main.go:141] libmachine: (ha-252263-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:38:14.716221  874942 main.go:141] libmachine: (ha-252263-m02) DBG | About to run SSH command:
	I0520 12:38:14.716240  874942 main.go:141] libmachine: (ha-252263-m02) DBG | exit 0
	I0520 12:38:14.719748  874942 main.go:141] libmachine: (ha-252263-m02) DBG | SSH cmd err, output: exit status 255: 
	I0520 12:38:14.719768  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 12:38:14.719775  874942 main.go:141] libmachine: (ha-252263-m02) DBG | command : exit 0
	I0520 12:38:14.719792  874942 main.go:141] libmachine: (ha-252263-m02) DBG | err     : exit status 255
	I0520 12:38:14.719808  874942 main.go:141] libmachine: (ha-252263-m02) DBG | output  : 
	I0520 12:38:17.720763  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Getting to WaitForSSH function...
	I0520 12:38:17.723007  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.723453  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:17.723492  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.723591  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH client type: external
	I0520 12:38:17.723614  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa (-rw-------)
	I0520 12:38:17.723641  874942 main.go:141] libmachine: (ha-252263-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:38:17.723661  874942 main.go:141] libmachine: (ha-252263-m02) DBG | About to run SSH command:
	I0520 12:38:17.723681  874942 main.go:141] libmachine: (ha-252263-m02) DBG | exit 0
	I0520 12:38:17.851148  874942 main.go:141] libmachine: (ha-252263-m02) DBG | SSH cmd err, output: <nil>: 
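Once an address is known, WaitForSSH shells out to /usr/bin/ssh and runs `exit 0` against the guest until it succeeds; the first probe above fails with status 255, and a later one returns cleanly. A rough sketch of that readiness probe, reusing the key path and address from the log (sshReady is an illustrative name):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `ssh ... exit 0` succeeds against the guest.
	func sshReady(addr, keyPath string, attempts int, wait time.Duration) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+addr, "exit 0")
			err := cmd.Run()
			if err == nil {
				return nil // guest answered and ran the command
			}
			fmt.Printf("attempt %d failed: %v\n", i+1, err)
			time.Sleep(wait)
		}
		return fmt.Errorf("ssh to %s never became available", addr)
	}

	func main() {
		key := "/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa"
		err := sshReady("192.168.39.22", key, 10, 3*time.Second)
		fmt.Println("ssh ready:", err == nil)
	}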
	I0520 12:38:17.851442  874942 main.go:141] libmachine: (ha-252263-m02) KVM machine creation complete!
	I0520 12:38:17.851752  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetConfigRaw
	I0520 12:38:17.852318  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:17.852584  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:17.852759  874942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:38:17.852778  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:38:17.854013  874942 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:38:17.854029  874942 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:38:17.854035  874942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:38:17.854041  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:17.856077  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.856418  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:17.856447  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.856606  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:17.856825  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.856987  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.857132  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:17.857297  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:17.857501  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:17.857511  874942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:38:17.966235  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:38:17.966263  874942 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:38:17.966274  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:17.968639  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.968970  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:17.969001  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.969123  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:17.969315  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.969472  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.969623  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:17.969813  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:17.970030  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:17.970044  874942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:38:18.083552  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:38:18.083627  874942 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:38:18.083636  874942 main.go:141] libmachine: Provisioning with buildroot...
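The provisioner is picked by running `cat /etc/os-release` on the guest and matching its fields; here it resolves to buildroot. A small sketch of that detection, with a hypothetical detectProvisioner helper fed the output captured above:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner parses /etc/os-release output and matches the ID
	// field against known distributions ("buildroot" in this run).
	func detectProvisioner(osRelease string) string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), "="); ok {
				fields[k] = strings.Trim(v, `"`)
			}
		}
		if strings.EqualFold(fields["ID"], "buildroot") {
			return "buildroot"
		}
		return "unknown"
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fmt.Println("found compatible host:", detectProvisioner(out))
	}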
	I0520 12:38:18.083645  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:38:18.083940  874942 buildroot.go:166] provisioning hostname "ha-252263-m02"
	I0520 12:38:18.083972  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:38:18.084172  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.087080  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.087485  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.087510  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.087644  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.087831  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.088009  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.088189  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.088342  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.088519  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.088535  874942 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263-m02 && echo "ha-252263-m02" | sudo tee /etc/hostname
	I0520 12:38:18.211635  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263-m02
	
	I0520 12:38:18.211668  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.214782  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.215150  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.215178  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.215379  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.215590  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.215775  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.215943  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.216127  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.216294  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.216311  874942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:38:18.332285  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:38:18.332319  874942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:38:18.332341  874942 buildroot.go:174] setting up certificates
	I0520 12:38:18.332361  874942 provision.go:84] configureAuth start
	I0520 12:38:18.332376  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:38:18.332703  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:18.335191  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.335530  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.335558  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.335676  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.337556  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.337857  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.337888  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.338030  874942 provision.go:143] copyHostCerts
	I0520 12:38:18.338068  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:38:18.338109  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:38:18.338122  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:38:18.338199  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:38:18.338333  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:38:18.338363  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:38:18.338374  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:38:18.338416  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:38:18.338483  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:38:18.338506  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:38:18.338514  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:38:18.338541  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:38:18.338610  874942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263-m02 san=[127.0.0.1 192.168.39.22 ha-252263-m02 localhost minikube]
	I0520 12:38:18.401827  874942 provision.go:177] copyRemoteCerts
	I0520 12:38:18.401892  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:38:18.401921  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.404423  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.404727  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.404747  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.405074  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.405337  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.405507  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.405673  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:18.489155  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:38:18.489248  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:38:18.513816  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:38:18.513892  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:38:18.537782  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:38:18.537857  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:38:18.562319  874942 provision.go:87] duration metric: took 229.942119ms to configureAuth
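configureAuth above copies the CA, server certificate, and server key into /etc/docker on the new node. minikube streams these files over its own SSH runner; the sketch below only approximates the same transfer with plain scp (pushCert is an illustrative name, and writing into /etc/docker would additionally need root on the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pushCert copies a local PEM file to the guest over scp, mirroring the
	// ca.pem / server.pem / server-key.pem transfers in the log.
	func pushCert(addr, keyPath, local, remote string) error {
		cmd := exec.Command("scp",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			local, fmt.Sprintf("docker@%s:%s", addr, remote))
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp %s: %v: %s", local, err, out)
		}
		return nil
	}

	func main() {
		base := "/home/jenkins/minikube-integration/18932-852915/.minikube"
		key := base + "/machines/ha-252263-m02/id_rsa"
		files := map[string]string{
			base + "/certs/ca.pem":               "/etc/docker/ca.pem",
			base + "/machines/server.pem":        "/etc/docker/server.pem",
			base + "/machines/server-key.pem":    "/etc/docker/server-key.pem",
		}
		for local, remote := range files {
			if err := pushCert("192.168.39.22", key, local, remote); err != nil {
				fmt.Println(err)
			}
		}
	}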
	I0520 12:38:18.562351  874942 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:38:18.562567  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:38:18.562662  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.565464  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.565905  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.565942  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.566123  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.566451  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.566669  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.566842  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.567056  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.567268  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.567291  874942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:38:18.827916  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:38:18.827949  874942 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:38:18.827960  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetURL
	I0520 12:38:18.829240  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using libvirt version 6000000
	I0520 12:38:18.831406  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.831794  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.831823  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.831962  874942 main.go:141] libmachine: Docker is up and running!
	I0520 12:38:18.831976  874942 main.go:141] libmachine: Reticulating splines...
	I0520 12:38:18.831984  874942 client.go:171] duration metric: took 26.447954823s to LocalClient.Create
	I0520 12:38:18.832006  874942 start.go:167] duration metric: took 26.448010511s to libmachine.API.Create "ha-252263"
	I0520 12:38:18.832016  874942 start.go:293] postStartSetup for "ha-252263-m02" (driver="kvm2")
	I0520 12:38:18.832026  874942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:38:18.832043  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:18.832297  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:38:18.832328  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.834658  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.835010  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.835051  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.835160  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.835368  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.835507  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.835750  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:18.921789  874942 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:38:18.926130  874942 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:38:18.926160  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:38:18.926229  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:38:18.926308  874942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:38:18.926319  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:38:18.926401  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:38:18.936277  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:38:18.959633  874942 start.go:296] duration metric: took 127.60085ms for postStartSetup
	I0520 12:38:18.959689  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetConfigRaw
	I0520 12:38:18.960282  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:18.963033  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.963353  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.963376  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.963606  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:38:18.963783  874942 start.go:128] duration metric: took 26.597693013s to createHost
	I0520 12:38:18.963808  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.966087  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.966481  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.966517  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.966671  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.966915  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.967077  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.967209  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.967430  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.967598  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.967608  874942 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:38:19.075872  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208699.055192413
	
	I0520 12:38:19.075899  874942 fix.go:216] guest clock: 1716208699.055192413
	I0520 12:38:19.075906  874942 fix.go:229] Guest: 2024-05-20 12:38:19.055192413 +0000 UTC Remote: 2024-05-20 12:38:18.963794268 +0000 UTC m=+83.475577267 (delta=91.398145ms)
	I0520 12:38:19.075922  874942 fix.go:200] guest clock delta is within tolerance: 91.398145ms
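The clock check runs `date +%s.%N` on the guest, parses the result, and compares it with the host clock; the 91ms delta above is accepted. A sketch of that comparison, with the tolerance value assumed for illustration only:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output captured over SSH into
	// a time.Time so the skew against the host clock can be measured.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1716208699.055192413")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		// the run above treats a sub-second skew (91ms) as within tolerance
		const tolerance = 2 * time.Second // assumed threshold for this sketch
		fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
	}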
	I0520 12:38:19.075927  874942 start.go:83] releasing machines lock for "ha-252263-m02", held for 26.709919409s
	I0520 12:38:19.075945  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.076209  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:19.079701  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.080070  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:19.080096  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.082634  874942 out.go:177] * Found network options:
	I0520 12:38:19.084160  874942 out.go:177]   - NO_PROXY=192.168.39.182
	W0520 12:38:19.085403  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:38:19.085449  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.085975  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.086157  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.086257  874942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:38:19.086310  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	W0520 12:38:19.086320  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:38:19.086394  874942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:38:19.086418  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:19.088785  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089158  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089189  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:19.089213  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089391  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:19.089590  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:19.089655  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:19.089678  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089784  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:19.089864  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:19.090033  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:19.090411  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:19.090572  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:19.090749  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:19.324093  874942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:38:19.331022  874942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:38:19.331094  874942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:38:19.347892  874942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:38:19.347911  874942 start.go:494] detecting cgroup driver to use...
	I0520 12:38:19.347980  874942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:38:19.364955  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:38:19.379483  874942 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:38:19.379530  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:38:19.392802  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:38:19.405888  874942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:38:19.514514  874942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:38:19.692620  874942 docker.go:233] disabling docker service ...
	I0520 12:38:19.692698  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:38:19.707446  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:38:19.721687  874942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:38:19.838194  874942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:38:19.949936  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:38:19.964631  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:38:19.983818  874942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:38:19.983889  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:19.994815  874942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:38:19.994894  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.005752  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.016982  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.035035  874942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:38:20.046485  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.056549  874942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.073191  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.083150  874942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:38:20.092175  874942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:38:20.092230  874942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:38:20.104850  874942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:38:20.114172  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:38:20.230940  874942 ssh_runner.go:195] Run: sudo systemctl restart crio
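The CRI-O preparation above is a series of sed edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cgroup_manager to cgroupfs, and place conmon in the "pod" cgroup, followed by a daemon-reload and crio restart. The sketch below collapses those edits into direct string rewrites to show their net effect (the real flow deletes and re-inserts the conmon_cgroup line rather than replacing it in place):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the same changes the sed commands above make
	// to 02-crio.conf, expressed as in-memory replacements.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(in))
	}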
	I0520 12:38:20.369577  874942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:38:20.369648  874942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:38:20.374381  874942 start.go:562] Will wait 60s for crictl version
	I0520 12:38:20.374441  874942 ssh_runner.go:195] Run: which crictl
	I0520 12:38:20.378268  874942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:38:20.420213  874942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:38:20.420283  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:38:20.447229  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:38:20.475802  874942 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:38:20.477391  874942 out.go:177]   - env NO_PROXY=192.168.39.182
	I0520 12:38:20.478647  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:20.481074  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:20.481427  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:20.481458  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:20.481619  874942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:38:20.485598  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:38:20.498327  874942 mustload.go:65] Loading cluster: ha-252263
	I0520 12:38:20.498517  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:38:20.498773  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:38:20.498801  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:38:20.513186  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0520 12:38:20.513621  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:38:20.514113  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:38:20.514133  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:38:20.514454  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:38:20.514641  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:38:20.516315  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:38:20.516605  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:38:20.516630  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:38:20.530533  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0520 12:38:20.530957  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:38:20.531387  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:38:20.531408  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:38:20.531750  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:38:20.531901  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:38:20.532079  874942 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.22
	I0520 12:38:20.532092  874942 certs.go:194] generating shared ca certs ...
	I0520 12:38:20.532106  874942 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:38:20.532226  874942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:38:20.532269  874942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:38:20.532283  874942 certs.go:256] generating profile certs ...
	I0520 12:38:20.532357  874942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:38:20.532383  874942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66
	I0520 12:38:20.532397  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.22 192.168.39.254]
	I0520 12:38:20.704724  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66 ...
	I0520 12:38:20.704764  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66: {Name:mk90854b85c58258865cd7915fa91b5b8292a209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:38:20.704946  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66 ...
	I0520 12:38:20.704968  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66: {Name:mk4f87701cc78eff0286b15f5fc1624a9aabe73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:38:20.705066  874942 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:38:20.705205  874942 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
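The apiserver certificate is regenerated so its SANs cover the new node: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.182, 192.168.39.22 and the load-balancer VIP 192.168.39.254. A self-contained sketch of issuing such a certificate with crypto/x509; the throwaway CA here stands in for minikubeCA, and error handling is dropped for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// throwaway CA standing in for minikubeCA (errors elided in this sketch)
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// apiserver-style cert with the IP SANs listed in the log
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.182"), net.ParseIP("192.168.39.22"), net.ParseIP("192.168.39.254"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}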
	I0520 12:38:20.705379  874942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:38:20.705399  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:38:20.705415  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:38:20.705425  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:38:20.705435  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:38:20.705448  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:38:20.705468  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:38:20.705484  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:38:20.705500  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:38:20.705560  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:38:20.705602  874942 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:38:20.705615  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:38:20.705648  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:38:20.705680  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:38:20.705713  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:38:20.705768  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:38:20.705807  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:20.705830  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:38:20.705848  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:38:20.705890  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:38:20.709247  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:20.709595  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:38:20.709627  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:20.709770  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:38:20.710174  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:38:20.710385  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:38:20.710573  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:38:20.783208  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 12:38:20.789780  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 12:38:20.800913  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 12:38:20.805394  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 12:38:20.816232  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 12:38:20.821031  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 12:38:20.831501  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 12:38:20.836361  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 12:38:20.846199  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 12:38:20.850364  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 12:38:20.860304  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 12:38:20.864515  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 12:38:20.875902  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:38:20.901107  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:38:20.924273  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:38:20.946974  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:38:20.969264  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 12:38:20.991963  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:38:21.014098  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:38:21.036723  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:38:21.061090  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:38:21.085430  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:38:21.109466  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:38:21.131873  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 12:38:21.147577  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 12:38:21.162873  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 12:38:21.178418  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 12:38:21.193875  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 12:38:21.209750  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 12:38:21.225684  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 12:38:21.241595  874942 ssh_runner.go:195] Run: openssl version
	I0520 12:38:21.247131  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:38:21.257262  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:21.261466  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:21.261521  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:21.266938  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:38:21.277544  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:38:21.287930  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:38:21.292187  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:38:21.292230  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:38:21.297613  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:38:21.307788  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:38:21.319198  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:38:21.323610  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:38:21.323660  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:38:21.329016  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:38:21.339656  874942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:38:21.343537  874942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:38:21.343585  874942 kubeadm.go:928] updating node {m02 192.168.39.22 8443 v1.30.1 crio true true} ...
	I0520 12:38:21.343664  874942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:38:21.343691  874942 kube-vip.go:115] generating kube-vip config ...
	I0520 12:38:21.343718  874942 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:38:21.359494  874942 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:38:21.359573  874942 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
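The block above is the kube-vip static-pod manifest that minikube reports generating for the control-plane VIP (192.168.39.254, port 8443). A minimal sketch, assuming the manifest is saved locally as kube-vip.yaml and that the sigs.k8s.io/yaml and k8s.io/api modules are available, that parses it and prints the image and VIP address; illustrative only, not minikube's own code:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Read a locally saved copy of the generated manifest (the log above writes it
	// to /etc/kubernetes/manifests/kube-vip.yaml on the node).
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}

	var pod corev1.Pod
	// sigs.k8s.io/yaml converts YAML to JSON before unmarshalling, so the
	// corev1 struct tags apply directly.
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	if len(pod.Spec.Containers) == 0 {
		panic("manifest has no containers")
	}

	c := pod.Spec.Containers[0]
	fmt.Println("image:", c.Image)
	for _, e := range c.Env {
		if e.Name == "address" {
			fmt.Println("VIP address:", e.Value) // expected 192.168.39.254 for this profile
		}
	}
}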
	I0520 12:38:21.359630  874942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:38:21.369004  874942 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 12:38:21.369070  874942 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 12:38:21.378284  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 12:38:21.378309  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:38:21.378347  874942 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0520 12:38:21.378377  874942 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0520 12:38:21.378383  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:38:21.382550  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 12:38:21.382590  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 12:38:21.923517  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:38:21.923591  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:38:21.929603  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 12:38:21.929635  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 12:38:22.258396  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:38:22.272569  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:38:22.272682  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:38:22.277271  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 12:38:22.277302  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
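The kubectl, kubeadm, and kubelet transfers above come from minikube's local cache, which binary.go reports resolving from dl.k8s.io with a checksum= reference to the published .sha256 file. A minimal sketch of that download-and-verify pattern (not minikube's downloader; it assumes network access to dl.k8s.io and uses only the standard library):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex-encoded SHA-256 of the body.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet"

	// The published checksum file holds the hex digest (possibly followed by a filename).
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want, err := io.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		panic(err)
	}

	got, err := fetch(base, "kubelet")
	if err != nil {
		panic(err)
	}
	if got != strings.Fields(string(want))[0] {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubelet verified:", got)
}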
	I0520 12:38:22.687133  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 12:38:22.696541  874942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:38:22.713190  874942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:38:22.729330  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 12:38:22.745861  874942 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:38:22.749718  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:38:22.762163  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:38:22.888499  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:38:22.905439  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:38:22.905827  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:38:22.905871  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:38:22.925249  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0520 12:38:22.925768  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:38:22.926297  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:38:22.926329  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:38:22.926648  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:38:22.926855  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:38:22.926978  874942 start.go:316] joinCluster: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:38:22.927128  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 12:38:22.927154  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:38:22.930191  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:22.930643  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:38:22.930670  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:22.931099  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:38:22.931318  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:38:22.931473  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:38:22.931610  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:38:23.081518  874942 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:38:23.081581  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5hi4iu.txjljiqwqlue37gn --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I0520 12:38:45.161655  874942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5hi4iu.txjljiqwqlue37gn --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (22.080039262s)
	I0520 12:38:45.161700  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 12:38:45.717850  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-252263-m02 minikube.k8s.io/updated_at=2024_05_20T12_38_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=ha-252263 minikube.k8s.io/primary=false
	I0520 12:38:45.818374  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-252263-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 12:38:45.972007  874942 start.go:318] duration metric: took 23.045022352s to joinCluster
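The join above is driven by a token that the existing control plane prints with "kubeadm token create --print-join-command --ttl=0" and that is then replayed on m02 with the additional control-plane flags shown in the log. A minimal sketch of producing such a command, assuming it runs on a node that already has kubeadm and admin credentials (illustrative only, not the minikube code path):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Print a join command with a non-expiring token, mirroring the command
	// the log above runs over SSH on the first control-plane node.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))
	fmt.Println(joinCmd)

	// For a second control-plane node the log appends flags such as
	// --control-plane and --apiserver-advertise-address=<node IP> before running it there.
	fmt.Println(joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.22")
}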
	I0520 12:38:45.972097  874942 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:38:45.973575  874942 out.go:177] * Verifying Kubernetes components...
	I0520 12:38:45.972387  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:38:45.975129  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:38:46.205640  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:38:46.226306  874942 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:38:46.226517  874942 kapi.go:59] client config for ha-252263: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 12:38:46.226577  874942 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.182:8443
	I0520 12:38:46.226802  874942 node_ready.go:35] waiting up to 6m0s for node "ha-252263-m02" to be "Ready" ...
	I0520 12:38:46.226943  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:46.226949  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:46.226957  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:46.226961  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:46.245572  874942 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0520 12:38:46.727739  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:46.727763  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:46.727776  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:46.727782  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:46.732127  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:47.228051  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:47.228076  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:47.228092  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:47.228097  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:47.231911  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:47.727055  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:47.727084  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:47.727095  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:47.727102  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:47.735122  874942 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 12:38:48.227827  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:48.227848  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:48.227855  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:48.227859  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:48.233298  874942 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 12:38:48.234147  874942 node_ready.go:53] node "ha-252263-m02" has status "Ready":"False"
	I0520 12:38:48.727051  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:48.727080  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:48.727089  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:48.727094  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:48.731160  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:49.227153  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:49.227178  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:49.227188  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:49.227194  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:49.230118  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:49.727080  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:49.727106  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:49.727115  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:49.727117  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:49.730235  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.227040  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:50.227067  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.227075  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.227079  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.230058  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.727562  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:50.727586  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.727597  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.727604  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.731317  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.732048  874942 node_ready.go:49] node "ha-252263-m02" has status "Ready":"True"
	I0520 12:38:50.732073  874942 node_ready.go:38] duration metric: took 4.505226722s for node "ha-252263-m02" to be "Ready" ...
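The repeated GET requests against /api/v1/nodes/ha-252263-m02 above are the readiness poll behind node_ready.go. A standalone sketch of the same check using client-go; the kubeconfig path, node name, and 6m0s budget are taken from this log, and everything else is an illustrative assumption rather than minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18932-852915/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same overall budget as the wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-252263-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice per second
	}
	panic("timed out waiting for node to become Ready")
}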
	I0520 12:38:50.732084  874942 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:38:50.732190  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:38:50.732202  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.732213  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.732220  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.738362  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:38:50.745180  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.745277  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-96h5w
	I0520 12:38:50.745287  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.745298  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.745303  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.748060  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.751342  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:38:50.751363  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.751373  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.751380  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.754720  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.755641  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace has status "Ready":"True"
	I0520 12:38:50.755668  874942 pod_ready.go:81] duration metric: took 10.464929ms for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.755680  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.755746  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2vkj
	I0520 12:38:50.755756  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.755765  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.755774  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.758425  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.759133  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:38:50.759150  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.759157  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.759162  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.761960  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.762415  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace has status "Ready":"True"
	I0520 12:38:50.762432  874942 pod_ready.go:81] duration metric: took 6.745564ms for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.762439  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.762484  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263
	I0520 12:38:50.762492  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.762501  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.762511  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.765276  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.765815  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:38:50.765831  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.765841  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.765846  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.769196  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.769588  874942 pod_ready.go:92] pod "etcd-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:38:50.769603  874942 pod_ready.go:81] duration metric: took 7.157596ms for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.769610  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.769660  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:50.769670  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.769677  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.769680  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.773058  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.773649  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:50.773669  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.773680  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.773686  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.775958  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:51.269875  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:51.269905  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.269918  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.269924  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.273355  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:51.273947  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:51.273961  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.273969  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.273973  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.277730  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:51.770038  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:51.770062  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.770071  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.770076  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.773480  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:51.774205  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:51.774220  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.774229  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.774238  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.776847  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:52.269838  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:52.269868  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.269878  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.269882  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.272746  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:52.273346  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:52.273360  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.273368  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.273372  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.276545  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:52.770638  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:52.770660  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.770668  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.770672  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.775017  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:52.776237  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:52.776253  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.776260  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.776265  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.780136  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:52.780922  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:38:53.270837  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:53.270882  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.270893  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.270899  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.274064  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:53.274813  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:53.274830  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.274838  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.274864  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.277652  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:53.770574  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:53.770600  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.770609  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.770612  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.773951  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:53.774812  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:53.774829  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.774836  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.774840  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.777582  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:54.270471  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:54.270498  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.270506  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.270511  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.274188  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:54.275072  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:54.275090  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.275098  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.275103  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.277898  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:54.769908  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:54.769932  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.769940  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.769943  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.773719  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:54.774388  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:54.774409  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.774418  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.774422  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.777209  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:55.270531  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:55.270561  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.270572  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.270578  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.274787  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:55.276186  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:55.276207  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.276218  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.276226  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.278900  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:55.279558  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:38:55.770839  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:55.770878  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.770887  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.770919  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.774406  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:55.775128  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:55.775144  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.775152  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.775156  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.778049  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:56.270043  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:56.270066  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.270074  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.270080  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.273102  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:56.274105  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:56.274124  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.274136  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.274141  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.276748  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:56.770724  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:56.770760  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.770774  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.770781  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.774312  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:56.775245  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:56.775262  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.775269  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.775272  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.777640  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:57.270518  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:57.270541  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.270547  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.270551  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.274530  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:57.275524  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:57.275538  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.275545  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.275549  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.278190  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:57.770607  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:57.770631  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.770639  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.770643  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.774885  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:57.775642  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:57.775655  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.775669  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.775674  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.778361  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:57.778943  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:38:58.269827  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:58.269850  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.269858  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.269861  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.273236  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:58.273879  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:58.273890  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.273898  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.273902  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.277307  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:58.770146  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:58.770172  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.770177  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.770181  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.773644  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:58.774773  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:58.774791  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.774802  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.774806  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.777474  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:59.270715  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:59.270740  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.270752  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.270760  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.274082  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:59.274756  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:59.274775  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.274783  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.274787  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.277129  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:59.769885  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:59.769908  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.769916  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.769920  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.774084  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:59.774920  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:59.774935  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.774944  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.774951  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.777504  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:00.270554  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:00.270576  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.270584  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.270588  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.273581  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:00.274145  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:00.274161  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.274169  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.274174  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.276507  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:00.277061  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:39:00.769869  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:00.769895  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.769903  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.769906  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.773404  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:00.774101  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:00.774118  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.774126  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.774131  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.776849  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.269841  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:01.269871  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.269881  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.269893  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.274092  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:39:01.274741  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.274760  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.274768  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.274773  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.277070  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.770409  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:01.770434  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.770442  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.770445  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.773398  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.774207  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.774222  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.774231  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.774236  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.777238  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.777824  874942 pod_ready.go:92] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.777843  874942 pod_ready.go:81] duration metric: took 11.008226852s for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.777858  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.777919  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263
	I0520 12:39:01.777926  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.777933  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.777937  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.782842  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:39:01.783669  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:01.783686  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.783696  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.783702  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.794447  874942 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 12:39:01.795140  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.795175  874942 pod_ready.go:81] duration metric: took 17.306245ms for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.795191  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.795284  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263-m02
	I0520 12:39:01.795298  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.795308  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.795313  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.819716  874942 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0520 12:39:01.820481  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.820499  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.820508  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.820514  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.823486  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.823930  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.823951  874942 pod_ready.go:81] duration metric: took 28.750691ms for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.823965  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.824051  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263
	I0520 12:39:01.824065  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.824075  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.824082  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.830240  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:39:01.830951  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:01.830965  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.830973  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.830976  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.833514  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.834465  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.834488  874942 pod_ready.go:81] duration metric: took 10.500265ms for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.834500  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.834568  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84x7f
	I0520 12:39:01.834579  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.834589  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.834593  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.837125  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.837755  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.837767  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.837774  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.837779  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.840077  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.840524  874942 pod_ready.go:92] pod "kube-proxy-84x7f" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.840545  874942 pod_ready.go:81] duration metric: took 6.036863ms for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.840557  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.970923  874942 request.go:629] Waited for 130.282489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:39:01.970980  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:39:01.970985  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.970992  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.970996  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.973934  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:02.170878  874942 request.go:629] Waited for 196.369487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.170941  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.170946  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.170959  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.170964  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.174369  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.174917  874942 pod_ready.go:92] pod "kube-proxy-z5zvt" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:02.174936  874942 pod_ready.go:81] duration metric: took 334.371338ms for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:02.174946  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:02.371023  874942 request.go:629] Waited for 195.999349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:39:02.371093  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:39:02.371098  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.371105  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.371109  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.374674  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.570602  874942 request.go:629] Waited for 195.28291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.570682  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.570690  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.570701  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.570710  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.574742  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.575385  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:02.575403  874942 pod_ready.go:81] duration metric: took 400.451085ms for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:02.575415  874942 pod_ready.go:38] duration metric: took 11.843285919s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:39:02.575439  874942 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:39:02.575500  874942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:39:02.592758  874942 api_server.go:72] duration metric: took 16.620622173s to wait for apiserver process to appear ...
	I0520 12:39:02.592782  874942 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:39:02.592802  874942 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0520 12:39:02.597082  874942 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0520 12:39:02.597158  874942 round_trippers.go:463] GET https://192.168.39.182:8443/version
	I0520 12:39:02.597168  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.597176  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.597181  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.597994  874942 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 12:39:02.598158  874942 api_server.go:141] control plane version: v1.30.1
	I0520 12:39:02.598190  874942 api_server.go:131] duration metric: took 5.399467ms to wait for apiserver health ...
	I0520 12:39:02.598200  874942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:39:02.770583  874942 request.go:629] Waited for 172.286764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:02.770652  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:02.770657  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.770665  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.770669  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.776316  874942 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 12:39:02.781078  874942 system_pods.go:59] 17 kube-system pods found
	I0520 12:39:02.781103  874942 system_pods.go:61] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:39:02.781108  874942 system_pods.go:61] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:39:02.781111  874942 system_pods.go:61] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:39:02.781114  874942 system_pods.go:61] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:39:02.781117  874942 system_pods.go:61] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:39:02.781119  874942 system_pods.go:61] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:39:02.781122  874942 system_pods.go:61] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:39:02.781124  874942 system_pods.go:61] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:39:02.781127  874942 system_pods.go:61] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:39:02.781130  874942 system_pods.go:61] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:39:02.781133  874942 system_pods.go:61] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:39:02.781136  874942 system_pods.go:61] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:39:02.781138  874942 system_pods.go:61] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:39:02.781141  874942 system_pods.go:61] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:39:02.781144  874942 system_pods.go:61] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:39:02.781147  874942 system_pods.go:61] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:39:02.781149  874942 system_pods.go:61] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:39:02.781157  874942 system_pods.go:74] duration metric: took 182.947275ms to wait for pod list to return data ...
	I0520 12:39:02.781168  874942 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:39:02.970703  874942 request.go:629] Waited for 189.443135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:39:02.970763  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:39:02.970767  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.970785  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.970798  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.974258  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.974483  874942 default_sa.go:45] found service account: "default"
	I0520 12:39:02.974499  874942 default_sa.go:55] duration metric: took 193.324555ms for default service account to be created ...
	I0520 12:39:02.974507  874942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:39:03.170944  874942 request.go:629] Waited for 196.359277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:03.171057  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:03.171070  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:03.171079  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:03.171086  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:03.176098  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:39:03.180564  874942 system_pods.go:86] 17 kube-system pods found
	I0520 12:39:03.180588  874942 system_pods.go:89] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:39:03.180593  874942 system_pods.go:89] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:39:03.180598  874942 system_pods.go:89] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:39:03.180602  874942 system_pods.go:89] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:39:03.180605  874942 system_pods.go:89] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:39:03.180609  874942 system_pods.go:89] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:39:03.180615  874942 system_pods.go:89] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:39:03.180621  874942 system_pods.go:89] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:39:03.180631  874942 system_pods.go:89] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:39:03.180643  874942 system_pods.go:89] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:39:03.180652  874942 system_pods.go:89] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:39:03.180661  874942 system_pods.go:89] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:39:03.180667  874942 system_pods.go:89] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:39:03.180674  874942 system_pods.go:89] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:39:03.180678  874942 system_pods.go:89] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:39:03.180684  874942 system_pods.go:89] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:39:03.180690  874942 system_pods.go:89] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:39:03.180698  874942 system_pods.go:126] duration metric: took 206.18632ms to wait for k8s-apps to be running ...
	I0520 12:39:03.180706  874942 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:39:03.180763  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:39:03.196187  874942 system_svc.go:56] duration metric: took 15.474523ms WaitForService to wait for kubelet
	I0520 12:39:03.196214  874942 kubeadm.go:576] duration metric: took 17.224081773s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:39:03.196232  874942 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:39:03.370582  874942 request.go:629] Waited for 174.273669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes
	I0520 12:39:03.370675  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes
	I0520 12:39:03.370686  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:03.370697  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:03.370704  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:03.374520  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:03.375386  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:39:03.375427  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:39:03.375446  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:39:03.375451  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:39:03.375457  874942 node_conditions.go:105] duration metric: took 179.220453ms to run NodePressure ...
	I0520 12:39:03.375473  874942 start.go:240] waiting for startup goroutines ...
	I0520 12:39:03.375516  874942 start.go:254] writing updated cluster config ...
	I0520 12:39:03.380755  874942 out.go:177] 
	I0520 12:39:03.382323  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:03.382431  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:39:03.384034  874942 out.go:177] * Starting "ha-252263-m03" control-plane node in "ha-252263" cluster
	I0520 12:39:03.385228  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:39:03.385251  874942 cache.go:56] Caching tarball of preloaded images
	I0520 12:39:03.385362  874942 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:39:03.385375  874942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:39:03.385480  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:39:03.385652  874942 start.go:360] acquireMachinesLock for ha-252263-m03: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:39:03.385711  874942 start.go:364] duration metric: took 33.926µs to acquireMachinesLock for "ha-252263-m03"
	I0520 12:39:03.385736  874942 start.go:93] Provisioning new machine with config: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:39:03.385844  874942 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0520 12:39:03.387315  874942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:39:03.387412  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:03.387455  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:03.403199  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0520 12:39:03.403675  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:03.404220  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:03.404240  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:03.404581  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:03.404800  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:03.404971  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:03.405139  874942 start.go:159] libmachine.API.Create for "ha-252263" (driver="kvm2")
	I0520 12:39:03.405162  874942 client.go:168] LocalClient.Create starting
	I0520 12:39:03.405188  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 12:39:03.405219  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:39:03.405235  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:39:03.405286  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 12:39:03.405304  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:39:03.405313  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:39:03.405328  874942 main.go:141] libmachine: Running pre-create checks...
	I0520 12:39:03.405335  874942 main.go:141] libmachine: (ha-252263-m03) Calling .PreCreateCheck
	I0520 12:39:03.405544  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetConfigRaw
	I0520 12:39:03.405904  874942 main.go:141] libmachine: Creating machine...
	I0520 12:39:03.405917  874942 main.go:141] libmachine: (ha-252263-m03) Calling .Create
	I0520 12:39:03.406065  874942 main.go:141] libmachine: (ha-252263-m03) Creating KVM machine...
	I0520 12:39:03.407281  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found existing default KVM network
	I0520 12:39:03.407402  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found existing private KVM network mk-ha-252263
	I0520 12:39:03.407509  874942 main.go:141] libmachine: (ha-252263-m03) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03 ...
	I0520 12:39:03.407545  874942 main.go:141] libmachine: (ha-252263-m03) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:39:03.407598  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.407496  875716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:39:03.407683  874942 main.go:141] libmachine: (ha-252263-m03) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:39:03.671079  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.670953  875716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa...
	I0520 12:39:03.886224  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.886075  875716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/ha-252263-m03.rawdisk...
	I0520 12:39:03.886268  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Writing magic tar header
	I0520 12:39:03.886284  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Writing SSH key tar header
	I0520 12:39:03.886302  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.886229  875716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03 ...
	I0520 12:39:03.886399  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03
	I0520 12:39:03.886431  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 12:39:03.886445  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03 (perms=drwx------)
	I0520 12:39:03.886464  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:39:03.886474  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 12:39:03.886480  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:39:03.886491  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 12:39:03.886497  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:39:03.886506  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 12:39:03.886512  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:39:03.886525  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:39:03.886538  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home
	I0520 12:39:03.886553  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Skipping /home - not owner
	I0520 12:39:03.886567  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:39:03.886579  874942 main.go:141] libmachine: (ha-252263-m03) Creating domain...
	I0520 12:39:03.887530  874942 main.go:141] libmachine: (ha-252263-m03) define libvirt domain using xml: 
	I0520 12:39:03.887554  874942 main.go:141] libmachine: (ha-252263-m03) <domain type='kvm'>
	I0520 12:39:03.887564  874942 main.go:141] libmachine: (ha-252263-m03)   <name>ha-252263-m03</name>
	I0520 12:39:03.887571  874942 main.go:141] libmachine: (ha-252263-m03)   <memory unit='MiB'>2200</memory>
	I0520 12:39:03.887581  874942 main.go:141] libmachine: (ha-252263-m03)   <vcpu>2</vcpu>
	I0520 12:39:03.887592  874942 main.go:141] libmachine: (ha-252263-m03)   <features>
	I0520 12:39:03.887603  874942 main.go:141] libmachine: (ha-252263-m03)     <acpi/>
	I0520 12:39:03.887613  874942 main.go:141] libmachine: (ha-252263-m03)     <apic/>
	I0520 12:39:03.887631  874942 main.go:141] libmachine: (ha-252263-m03)     <pae/>
	I0520 12:39:03.887642  874942 main.go:141] libmachine: (ha-252263-m03)     
	I0520 12:39:03.887654  874942 main.go:141] libmachine: (ha-252263-m03)   </features>
	I0520 12:39:03.887666  874942 main.go:141] libmachine: (ha-252263-m03)   <cpu mode='host-passthrough'>
	I0520 12:39:03.887675  874942 main.go:141] libmachine: (ha-252263-m03)   
	I0520 12:39:03.887686  874942 main.go:141] libmachine: (ha-252263-m03)   </cpu>
	I0520 12:39:03.887694  874942 main.go:141] libmachine: (ha-252263-m03)   <os>
	I0520 12:39:03.887722  874942 main.go:141] libmachine: (ha-252263-m03)     <type>hvm</type>
	I0520 12:39:03.887735  874942 main.go:141] libmachine: (ha-252263-m03)     <boot dev='cdrom'/>
	I0520 12:39:03.887746  874942 main.go:141] libmachine: (ha-252263-m03)     <boot dev='hd'/>
	I0520 12:39:03.887757  874942 main.go:141] libmachine: (ha-252263-m03)     <bootmenu enable='no'/>
	I0520 12:39:03.887766  874942 main.go:141] libmachine: (ha-252263-m03)   </os>
	I0520 12:39:03.887776  874942 main.go:141] libmachine: (ha-252263-m03)   <devices>
	I0520 12:39:03.887787  874942 main.go:141] libmachine: (ha-252263-m03)     <disk type='file' device='cdrom'>
	I0520 12:39:03.887819  874942 main.go:141] libmachine: (ha-252263-m03)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/boot2docker.iso'/>
	I0520 12:39:03.887841  874942 main.go:141] libmachine: (ha-252263-m03)       <target dev='hdc' bus='scsi'/>
	I0520 12:39:03.887858  874942 main.go:141] libmachine: (ha-252263-m03)       <readonly/>
	I0520 12:39:03.887874  874942 main.go:141] libmachine: (ha-252263-m03)     </disk>
	I0520 12:39:03.887892  874942 main.go:141] libmachine: (ha-252263-m03)     <disk type='file' device='disk'>
	I0520 12:39:03.887910  874942 main.go:141] libmachine: (ha-252263-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:39:03.887927  874942 main.go:141] libmachine: (ha-252263-m03)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/ha-252263-m03.rawdisk'/>
	I0520 12:39:03.887937  874942 main.go:141] libmachine: (ha-252263-m03)       <target dev='hda' bus='virtio'/>
	I0520 12:39:03.887945  874942 main.go:141] libmachine: (ha-252263-m03)     </disk>
	I0520 12:39:03.887956  874942 main.go:141] libmachine: (ha-252263-m03)     <interface type='network'>
	I0520 12:39:03.887969  874942 main.go:141] libmachine: (ha-252263-m03)       <source network='mk-ha-252263'/>
	I0520 12:39:03.887981  874942 main.go:141] libmachine: (ha-252263-m03)       <model type='virtio'/>
	I0520 12:39:03.888001  874942 main.go:141] libmachine: (ha-252263-m03)     </interface>
	I0520 12:39:03.888015  874942 main.go:141] libmachine: (ha-252263-m03)     <interface type='network'>
	I0520 12:39:03.888027  874942 main.go:141] libmachine: (ha-252263-m03)       <source network='default'/>
	I0520 12:39:03.888038  874942 main.go:141] libmachine: (ha-252263-m03)       <model type='virtio'/>
	I0520 12:39:03.888048  874942 main.go:141] libmachine: (ha-252263-m03)     </interface>
	I0520 12:39:03.888055  874942 main.go:141] libmachine: (ha-252263-m03)     <serial type='pty'>
	I0520 12:39:03.888067  874942 main.go:141] libmachine: (ha-252263-m03)       <target port='0'/>
	I0520 12:39:03.888074  874942 main.go:141] libmachine: (ha-252263-m03)     </serial>
	I0520 12:39:03.888087  874942 main.go:141] libmachine: (ha-252263-m03)     <console type='pty'>
	I0520 12:39:03.888100  874942 main.go:141] libmachine: (ha-252263-m03)       <target type='serial' port='0'/>
	I0520 12:39:03.888128  874942 main.go:141] libmachine: (ha-252263-m03)     </console>
	I0520 12:39:03.888143  874942 main.go:141] libmachine: (ha-252263-m03)     <rng model='virtio'>
	I0520 12:39:03.888159  874942 main.go:141] libmachine: (ha-252263-m03)       <backend model='random'>/dev/random</backend>
	I0520 12:39:03.888175  874942 main.go:141] libmachine: (ha-252263-m03)     </rng>
	I0520 12:39:03.888187  874942 main.go:141] libmachine: (ha-252263-m03)     
	I0520 12:39:03.888197  874942 main.go:141] libmachine: (ha-252263-m03)     
	I0520 12:39:03.888208  874942 main.go:141] libmachine: (ha-252263-m03)   </devices>
	I0520 12:39:03.888219  874942 main.go:141] libmachine: (ha-252263-m03) </domain>
	I0520 12:39:03.888233  874942 main.go:141] libmachine: (ha-252263-m03) 
	I0520 12:39:03.895571  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:c1:14:2a in network default
	I0520 12:39:03.896226  874942 main.go:141] libmachine: (ha-252263-m03) Ensuring networks are active...
	I0520 12:39:03.896251  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:03.896881  874942 main.go:141] libmachine: (ha-252263-m03) Ensuring network default is active
	I0520 12:39:03.897179  874942 main.go:141] libmachine: (ha-252263-m03) Ensuring network mk-ha-252263 is active
	I0520 12:39:03.897566  874942 main.go:141] libmachine: (ha-252263-m03) Getting domain xml...
	I0520 12:39:03.898240  874942 main.go:141] libmachine: (ha-252263-m03) Creating domain...
	I0520 12:39:05.105433  874942 main.go:141] libmachine: (ha-252263-m03) Waiting to get IP...
	I0520 12:39:05.106218  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:05.106605  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:05.106663  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:05.106602  875716 retry.go:31] will retry after 189.118887ms: waiting for machine to come up
	I0520 12:39:05.296891  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:05.297288  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:05.297311  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:05.297271  875716 retry.go:31] will retry after 317.145066ms: waiting for machine to come up
	I0520 12:39:05.615752  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:05.616215  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:05.616249  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:05.616173  875716 retry.go:31] will retry after 447.616745ms: waiting for machine to come up
	I0520 12:39:06.065768  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:06.066232  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:06.066261  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:06.066176  875716 retry.go:31] will retry after 393.855692ms: waiting for machine to come up
	I0520 12:39:06.461797  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:06.462222  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:06.462251  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:06.462182  875716 retry.go:31] will retry after 722.017106ms: waiting for machine to come up
	I0520 12:39:07.186267  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:07.186837  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:07.186893  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:07.186781  875716 retry.go:31] will retry after 812.507046ms: waiting for machine to come up
	I0520 12:39:08.001315  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:08.001815  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:08.001846  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:08.001747  875716 retry.go:31] will retry after 1.17680348s: waiting for machine to come up
	I0520 12:39:09.180416  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:09.180898  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:09.180936  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:09.180842  875716 retry.go:31] will retry after 1.036373954s: waiting for machine to come up
	I0520 12:39:10.218911  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:10.219415  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:10.219449  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:10.219363  875716 retry.go:31] will retry after 1.804364122s: waiting for machine to come up
	I0520 12:39:12.025429  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:12.025849  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:12.025872  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:12.025805  875716 retry.go:31] will retry after 1.662611515s: waiting for machine to come up
	I0520 12:39:13.690240  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:13.690705  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:13.690737  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:13.690645  875716 retry.go:31] will retry after 2.645373784s: waiting for machine to come up
	I0520 12:39:16.337189  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:16.337570  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:16.337604  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:16.337513  875716 retry.go:31] will retry after 2.633391538s: waiting for machine to come up
	I0520 12:39:18.972698  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:18.973123  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:18.973152  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:18.973069  875716 retry.go:31] will retry after 3.486895075s: waiting for machine to come up
	I0520 12:39:22.461839  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:22.462465  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:22.462502  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:22.462423  875716 retry.go:31] will retry after 4.228316503s: waiting for machine to come up
	I0520 12:39:26.694705  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:26.695188  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has current primary IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:26.695206  874942 main.go:141] libmachine: (ha-252263-m03) Found IP for machine: 192.168.39.60
	I0520 12:39:26.695220  874942 main.go:141] libmachine: (ha-252263-m03) Reserving static IP address...
	I0520 12:39:26.695643  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find host DHCP lease matching {name: "ha-252263-m03", mac: "52:54:00:98:d8:f8", ip: "192.168.39.60"} in network mk-ha-252263
	I0520 12:39:26.769721  874942 main.go:141] libmachine: (ha-252263-m03) Reserved static IP address: 192.168.39.60
	I0520 12:39:26.769772  874942 main.go:141] libmachine: (ha-252263-m03) Waiting for SSH to be available...
	I0520 12:39:26.769782  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Getting to WaitForSSH function...
	I0520 12:39:26.772161  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:26.772548  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263
	I0520 12:39:26.772580  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find defined IP address of network mk-ha-252263 interface with MAC address 52:54:00:98:d8:f8
	I0520 12:39:26.772762  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH client type: external
	I0520 12:39:26.772793  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa (-rw-------)
	I0520 12:39:26.772827  874942 main.go:141] libmachine: (ha-252263-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:39:26.772842  874942 main.go:141] libmachine: (ha-252263-m03) DBG | About to run SSH command:
	I0520 12:39:26.772861  874942 main.go:141] libmachine: (ha-252263-m03) DBG | exit 0
	I0520 12:39:26.776329  874942 main.go:141] libmachine: (ha-252263-m03) DBG | SSH cmd err, output: exit status 255: 
	I0520 12:39:26.776354  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 12:39:26.776368  874942 main.go:141] libmachine: (ha-252263-m03) DBG | command : exit 0
	I0520 12:39:26.776380  874942 main.go:141] libmachine: (ha-252263-m03) DBG | err     : exit status 255
	I0520 12:39:26.776390  874942 main.go:141] libmachine: (ha-252263-m03) DBG | output  : 
	I0520 12:39:29.777276  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Getting to WaitForSSH function...
	I0520 12:39:29.779672  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.780071  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:29.780104  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.780276  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH client type: external
	I0520 12:39:29.780305  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa (-rw-------)
	I0520 12:39:29.780336  874942 main.go:141] libmachine: (ha-252263-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:39:29.780361  874942 main.go:141] libmachine: (ha-252263-m03) DBG | About to run SSH command:
	I0520 12:39:29.780380  874942 main.go:141] libmachine: (ha-252263-m03) DBG | exit 0
	I0520 12:39:29.902605  874942 main.go:141] libmachine: (ha-252263-m03) DBG | SSH cmd err, output: <nil>: 
	I0520 12:39:29.902890  874942 main.go:141] libmachine: (ha-252263-m03) KVM machine creation complete!
	I0520 12:39:29.903225  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetConfigRaw
	I0520 12:39:29.903833  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:29.904169  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:29.904395  874942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:39:29.904409  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:39:29.905638  874942 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:39:29.905652  874942 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:39:29.905658  874942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:39:29.905666  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:29.907571  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.907934  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:29.907968  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.908118  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:29.908283  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:29.908447  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:29.908603  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:29.908771  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:29.909043  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:29.909063  874942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:39:30.010161  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:39:30.010190  874942 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:39:30.010201  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.012815  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.013159  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.013185  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.013310  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.013546  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.013709  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.013837  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.013986  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.014145  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.014154  874942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:39:30.115542  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:39:30.115622  874942 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:39:30.115631  874942 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:39:30.115641  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:30.115952  874942 buildroot.go:166] provisioning hostname "ha-252263-m03"
	I0520 12:39:30.115985  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:30.116212  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.118895  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.119439  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.119467  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.119612  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.119825  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.119969  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.120096  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.120294  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.120465  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.120478  874942 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263-m03 && echo "ha-252263-m03" | sudo tee /etc/hostname
	I0520 12:39:30.237531  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263-m03
	
	I0520 12:39:30.237558  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.240315  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.240676  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.240706  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.240915  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.241108  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.241259  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.241373  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.241633  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.241807  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.241825  874942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:39:30.352476  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:39:30.352505  874942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:39:30.352522  874942 buildroot.go:174] setting up certificates
	I0520 12:39:30.352530  874942 provision.go:84] configureAuth start
	I0520 12:39:30.352540  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:30.352840  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:30.355295  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.355699  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.355725  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.355876  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.358109  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.358528  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.358557  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.358698  874942 provision.go:143] copyHostCerts
	I0520 12:39:30.358733  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:39:30.358794  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:39:30.358806  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:39:30.358902  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:39:30.358998  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:39:30.359023  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:39:30.359038  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:39:30.359077  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:39:30.359146  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:39:30.359171  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:39:30.359181  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:39:30.359216  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:39:30.359278  874942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263-m03 san=[127.0.0.1 192.168.39.60 ha-252263-m03 localhost minikube]
	I0520 12:39:30.469167  874942 provision.go:177] copyRemoteCerts
	I0520 12:39:30.469224  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:39:30.469251  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.471791  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.472232  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.472256  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.472471  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.472658  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.472808  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.472917  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:30.557144  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:39:30.557205  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:39:30.585069  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:39:30.585153  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 12:39:30.612358  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:39:30.612431  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:39:30.638936  874942 provision.go:87] duration metric: took 286.390722ms to configureAuth
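[Annotation] The configureAuth block above generates a per-machine server certificate signed by the minikube CA, with the SANs listed in the provision.go line (127.0.0.1, 192.168.39.60, ha-252263-m03, localhost, minikube), and then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. The following is a minimal Go sketch of that certificate step only, not minikube's actual provision code; the file names are placeholders and it assumes the CA key is PKCS#1 RSA PEM.

	// certsketch.go - hypothetical illustration of the "generating server cert ... san=[...]" step above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	// firstBlock returns the DER bytes of the first PEM block in b.
	func firstBlock(b []byte) []byte {
		block, _ := pem.Decode(b)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		return block.Bytes
	}

	func main() {
		// Load the signing CA (placeholder paths, not the Jenkins workspace paths in the log).
		caCertPEM, err := os.ReadFile("ca.pem")
		must(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		must(err)
		caCert, err := x509.ParseCertificate(firstBlock(caCertPEM))
		must(err)
		caKey, err := x509.ParsePKCS1PrivateKey(firstBlock(caKeyPEM)) // assumes an RSA PKCS#1 CA key
		must(err)

		// Fresh key pair for this machine's server certificate.
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)

		// SANs mirror the provision.go line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-252263-m03"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-252263-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.60")},
		}

		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		must(err)
		must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
		must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
	}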
	I0520 12:39:30.638969  874942 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:39:30.639205  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:30.639292  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.642201  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.642549  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.642578  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.642744  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.642974  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.643162  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.643313  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.643509  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.643682  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.643704  874942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:39:30.918204  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:39:30.918247  874942 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:39:30.918264  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetURL
	I0520 12:39:30.919832  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using libvirt version 6000000
	I0520 12:39:30.922674  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.923095  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.923137  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.923326  874942 main.go:141] libmachine: Docker is up and running!
	I0520 12:39:30.923338  874942 main.go:141] libmachine: Reticulating splines...
	I0520 12:39:30.923346  874942 client.go:171] duration metric: took 27.518176652s to LocalClient.Create
	I0520 12:39:30.923372  874942 start.go:167] duration metric: took 27.518234415s to libmachine.API.Create "ha-252263"
	I0520 12:39:30.923386  874942 start.go:293] postStartSetup for "ha-252263-m03" (driver="kvm2")
	I0520 12:39:30.923403  874942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:39:30.923426  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:30.923669  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:39:30.923705  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.925871  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.926250  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.926275  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.926427  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.926580  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.926788  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.926941  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:31.004832  874942 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:39:31.009213  874942 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:39:31.009238  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:39:31.009302  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:39:31.009388  874942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:39:31.009399  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:39:31.009502  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:39:31.018921  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:39:31.041753  874942 start.go:296] duration metric: took 118.352566ms for postStartSetup
	I0520 12:39:31.041802  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetConfigRaw
	I0520 12:39:31.042324  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:31.045019  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.045387  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.045412  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.045723  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:39:31.045972  874942 start.go:128] duration metric: took 27.660113785s to createHost
	I0520 12:39:31.046007  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:31.048377  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.048756  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.048789  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.048924  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:31.049136  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.049311  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.049478  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:31.049653  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:31.049859  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:31.049874  874942 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:39:31.152020  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208771.129480022
	
	I0520 12:39:31.152044  874942 fix.go:216] guest clock: 1716208771.129480022
	I0520 12:39:31.152053  874942 fix.go:229] Guest: 2024-05-20 12:39:31.129480022 +0000 UTC Remote: 2024-05-20 12:39:31.045989813 +0000 UTC m=+155.557772815 (delta=83.490209ms)
	I0520 12:39:31.152077  874942 fix.go:200] guest clock delta is within tolerance: 83.490209ms
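[Annotation] The guest-clock check above runs date +%s.%N inside the VM, parses the result, and compares it with the host clock; the run proceeds because the delta (83.490209ms) is within tolerance. A tiny Go sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not the value minikube uses.

	// clockdelta.go - hypothetical sketch of the "guest clock delta is within tolerance" check.
	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports whether guest and host clocks differ by no more than maxSkew.
	func withinTolerance(guest, host time.Time, maxSkew time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(maxSkew)
	}

	func main() {
		// Timestamps taken from the log lines above (guest vs. remote host time).
		guest := time.Unix(1716208771, 129480022)
		host := time.Unix(1716208771, 45989813)
		if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}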
	I0520 12:39:31.152084  874942 start.go:83] releasing machines lock for "ha-252263-m03", held for 27.766362061s
	I0520 12:39:31.152108  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.152411  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:31.154957  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.155385  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.155419  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.157319  874942 out.go:177] * Found network options:
	I0520 12:39:31.158655  874942 out.go:177]   - NO_PROXY=192.168.39.182,192.168.39.22
	W0520 12:39:31.159809  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 12:39:31.159828  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:39:31.159842  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.160356  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.160575  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.160676  874942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:39:31.160721  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	W0520 12:39:31.160754  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 12:39:31.160788  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:39:31.160859  874942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:39:31.160881  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:31.163394  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.163529  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.163791  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.163819  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.163955  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:31.164040  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.164061  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.164140  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.164228  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:31.164320  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:31.164386  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.164455  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:31.164504  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:31.164643  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:31.394977  874942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:39:31.401332  874942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:39:31.401415  874942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:39:31.418045  874942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:39:31.418070  874942 start.go:494] detecting cgroup driver to use...
	I0520 12:39:31.418146  874942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:39:31.435442  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:39:31.449967  874942 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:39:31.450040  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:39:31.463884  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:39:31.478183  874942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:39:31.605461  874942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:39:31.753952  874942 docker.go:233] disabling docker service ...
	I0520 12:39:31.754030  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:39:31.768796  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:39:31.781871  874942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:39:31.923469  874942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:39:32.048131  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:39:32.061578  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:39:32.080250  874942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:39:32.080322  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.091344  874942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:39:32.091412  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.102979  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.114019  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.124736  874942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:39:32.135603  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.149479  874942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.168430  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.180071  874942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:39:32.190436  874942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:39:32.190503  874942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:39:32.204611  874942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:39:32.214110  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:39:32.344192  874942 ssh_runner.go:195] Run: sudo systemctl restart crio
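[Annotation] The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 pause image, switch cgroup_manager to cgroupfs, and adjust conmon_cgroup and default_sysctls before crio is restarted. The following is a rough, hypothetical local equivalent of the first two edits only; it is not minikube's crio.go and merely shows the rewrite pattern applied to the drop-in file named in the log.

	// criopatch.go - hypothetical sketch of rewriting pause_image and cgroup_manager in a crio drop-in.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Replace whole lines, like the `sed -i 's|^.*pause_image = .*$|...|'` calls above.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}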
	I0520 12:39:32.481893  874942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:39:32.481977  874942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:39:32.487357  874942 start.go:562] Will wait 60s for crictl version
	I0520 12:39:32.487426  874942 ssh_runner.go:195] Run: which crictl
	I0520 12:39:32.491658  874942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:39:32.532074  874942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:39:32.532178  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:39:32.562070  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:39:32.593794  874942 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:39:32.595067  874942 out.go:177]   - env NO_PROXY=192.168.39.182
	I0520 12:39:32.596194  874942 out.go:177]   - env NO_PROXY=192.168.39.182,192.168.39.22
	I0520 12:39:32.597283  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:32.599980  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:32.600292  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:32.600322  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:32.600478  874942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:39:32.605498  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
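[Annotation] The bash one-liner above idempotently drops any existing host.minikube.internal entry from /etc/hosts and appends the fresh 192.168.39.1 mapping. A hypothetical Go version of the same upsert is sketched below, for illustration only.

	// hostsentry.go - hypothetical sketch of the idempotent /etc/hosts update shown above.
	package main

	import (
		"os"
		"strings"
	)

	// upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>".
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return
		}
		_ = os.WriteFile("/etc/hosts", []byte(upsertHost(string(data), "192.168.39.1", "host.minikube.internal")), 0644)
	}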
	I0520 12:39:32.621055  874942 mustload.go:65] Loading cluster: ha-252263
	I0520 12:39:32.621295  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:32.621555  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:32.621605  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:32.637339  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0520 12:39:32.637773  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:32.638218  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:32.638241  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:32.638541  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:32.638738  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:39:32.640317  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:39:32.640707  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:32.640754  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:32.655112  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0520 12:39:32.655469  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:32.655876  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:32.655898  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:32.656237  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:32.656449  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:39:32.656611  874942 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.60
	I0520 12:39:32.656624  874942 certs.go:194] generating shared ca certs ...
	I0520 12:39:32.656643  874942 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:39:32.656761  874942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:39:32.656808  874942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:39:32.656817  874942 certs.go:256] generating profile certs ...
	I0520 12:39:32.656891  874942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:39:32.656915  874942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d
	I0520 12:39:32.656928  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.22 192.168.39.60 192.168.39.254]
	I0520 12:39:32.811740  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d ...
	I0520 12:39:32.811772  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d: {Name:mk2490347f6aab00b81e510d8c0a07675811ea03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:39:32.811936  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d ...
	I0520 12:39:32.811947  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d: {Name:mkffe5436ecc0b97d71ed455d88101b1f79fe6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:39:32.812012  874942 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:39:32.812145  874942 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
	I0520 12:39:32.812273  874942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:39:32.812289  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:39:32.812302  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:39:32.812315  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:39:32.812327  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:39:32.812340  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:39:32.812352  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:39:32.812360  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:39:32.812370  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:39:32.812417  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:39:32.812443  874942 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:39:32.812452  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:39:32.812474  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:39:32.812495  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:39:32.812514  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:39:32.812549  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:39:32.812576  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:32.812589  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:39:32.812601  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:39:32.812637  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:39:32.816229  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:32.816613  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:39:32.816653  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:32.816810  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:39:32.817016  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:39:32.817151  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:39:32.817299  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:39:32.891183  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 12:39:32.898524  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 12:39:32.910270  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 12:39:32.914810  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 12:39:32.926151  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 12:39:32.930611  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 12:39:32.941555  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 12:39:32.946145  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 12:39:32.956373  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 12:39:32.960354  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 12:39:32.970797  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 12:39:32.974673  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 12:39:32.984778  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:39:33.012166  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:39:33.036483  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:39:33.061487  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:39:33.089543  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 12:39:33.115462  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:39:33.139446  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:39:33.166323  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:39:33.192642  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:39:33.217038  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:39:33.241040  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:39:33.265169  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 12:39:33.281034  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 12:39:33.297163  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 12:39:33.313889  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 12:39:33.330334  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 12:39:33.347544  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 12:39:33.364181  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 12:39:33.380863  874942 ssh_runner.go:195] Run: openssl version
	I0520 12:39:33.386771  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:39:33.397901  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:33.402622  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:33.402687  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:33.408489  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:39:33.419751  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:39:33.430108  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:39:33.434929  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:39:33.434970  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:39:33.440729  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:39:33.452477  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:39:33.463377  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:39:33.467994  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:39:33.468049  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:39:33.473960  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:39:33.484269  874942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:39:33.488247  874942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:39:33.488301  874942 kubeadm.go:928] updating node {m03 192.168.39.60 8443 v1.30.1 crio true true} ...
	I0520 12:39:33.488395  874942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:39:33.488431  874942 kube-vip.go:115] generating kube-vip config ...
	I0520 12:39:33.488467  874942 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:39:33.504248  874942 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:39:33.504352  874942 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
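[Annotation] The YAML above is the generated kube-vip static pod manifest that advertises the HA virtual IP 192.168.39.254 on port 8443 for the control plane. The sketch below shows one hypothetical way to template such a manifest on the VIP and port; it is deliberately abbreviated and is not the generator minikube uses.

	// kubevip.go - hypothetical sketch of templating a kube-vip static pod manifest on the VIP/port.
	package main

	import (
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		_ = t.Execute(os.Stdout, struct {
			VIP  string
			Port string
		}{VIP: "192.168.39.254", Port: "8443"})
	}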
	I0520 12:39:33.504406  874942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:39:33.513663  874942 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 12:39:33.513719  874942 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 12:39:33.522595  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 12:39:33.522621  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:39:33.522635  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 12:39:33.522640  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 12:39:33.522655  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:39:33.522685  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:39:33.522696  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:39:33.522751  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:39:33.527003  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 12:39:33.527034  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 12:39:33.554916  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:39:33.555021  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:39:33.554928  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 12:39:33.555083  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 12:39:33.590624  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 12:39:33.590665  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
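[Annotation] Because /var/lib/minikube/binaries/v1.30.1 is empty on the new node, kubectl, kubeadm and kubelet are copied over from the host's cache; the earlier "Not caching binary" lines reference the dl.k8s.io download URLs together with their .sha256 checksum files. Below is a hypothetical Go sketch of downloading one binary and verifying it against that checksum file; it assumes the .sha256 file holds just the hex digest.

	// fetchbinary.go - hypothetical sketch of fetching a Kubernetes binary and checking its sha256.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		h := sha256.Sum256(bin)
		if hex.EncodeToString(h[:]) != strings.TrimSpace(string(sum)) {
			fmt.Fprintln(os.Stderr, "checksum mismatch")
			return
		}
		_ = os.WriteFile("kubectl", bin, 0755)
		fmt.Println("kubectl verified and written")
	}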
	I0520 12:39:34.423074  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 12:39:34.432920  874942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:39:34.449539  874942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:39:34.466654  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 12:39:34.483572  874942 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:39:34.487390  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:39:34.500354  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:39:34.625707  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:39:34.645038  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:39:34.645615  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:34.645683  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:34.663218  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0520 12:39:34.663759  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:34.664283  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:34.664307  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:34.664619  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:34.664875  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:39:34.665068  874942 start.go:316] joinCluster: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:39:34.665191  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 12:39:34.665222  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:39:34.668649  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:34.669191  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:39:34.669220  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:34.669393  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:39:34.669566  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:39:34.669714  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:39:34.669907  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:39:34.916769  874942 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:39:34.916839  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pu2q37.5isfc5ba65e0sin1 --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m03 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443"
	I0520 12:39:58.501906  874942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pu2q37.5isfc5ba65e0sin1 --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m03 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443": (23.585033533s)
	I0520 12:39:58.501957  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 12:39:59.121244  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-252263-m03 minikube.k8s.io/updated_at=2024_05_20T12_39_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=ha-252263 minikube.k8s.io/primary=false
	I0520 12:39:59.233711  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-252263-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 12:39:59.359715  874942 start.go:318] duration metric: took 24.694639977s to joinCluster
	I0520 12:39:59.359826  874942 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:39:59.360194  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:59.361582  874942 out.go:177] * Verifying Kubernetes components...
	I0520 12:39:59.362954  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:39:59.660541  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:39:59.730100  874942 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:39:59.730552  874942 kapi.go:59] client config for ha-252263: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 12:39:59.730659  874942 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.182:8443
	I0520 12:39:59.731013  874942 node_ready.go:35] waiting up to 6m0s for node "ha-252263-m03" to be "Ready" ...
	I0520 12:39:59.731135  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:39:59.731148  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:59.731163  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:59.731171  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:59.733785  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:00.231834  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:00.231857  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:00.231865  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:00.231869  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:00.235785  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:00.731900  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:00.731924  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:00.731932  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:00.731936  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:00.736011  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:01.231724  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:01.231776  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:01.231788  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:01.231797  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:01.236497  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:01.731514  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:01.731537  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:01.731546  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:01.731550  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:01.736001  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:01.736882  874942 node_ready.go:53] node "ha-252263-m03" has status "Ready":"False"
	I0520 12:40:02.232193  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:02.232222  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.232232  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.232236  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.235025  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.731753  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:02.731783  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.731794  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.731802  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.735081  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.735826  874942 node_ready.go:49] node "ha-252263-m03" has status "Ready":"True"
	I0520 12:40:02.735847  874942 node_ready.go:38] duration metric: took 3.004807659s for node "ha-252263-m03" to be "Ready" ...
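
The node_ready wait above is a plain poll: fetch the node object and check whether its Ready condition reports True, retrying until the timeout. A minimal client-go sketch of the same check follows; the kubeconfig path is a stand-in and this is an illustration, not minikube's own node_ready implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports Ready=True
	// or the context expires.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		// Stand-in kubeconfig path; the node name matches the one waited on above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-252263-m03"); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
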
	I0520 12:40:02.735857  874942 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:40:02.735920  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:02.735928  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.735936  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.735943  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.743026  874942 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 12:40:02.751704  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.751810  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-96h5w
	I0520 12:40:02.751820  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.751829  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.751836  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.754604  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.755245  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:02.755263  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.755274  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.755280  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.758437  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.759209  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.759233  874942 pod_ready.go:81] duration metric: took 7.496777ms for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.759246  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.759320  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2vkj
	I0520 12:40:02.759331  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.759341  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.759347  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.761860  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.762510  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:02.762524  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.762532  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.762535  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.765184  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.765768  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.765785  874942 pod_ready.go:81] duration metric: took 6.527198ms for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.765796  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.765855  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263
	I0520 12:40:02.765863  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.765872  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.765880  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.769921  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:02.770920  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:02.770939  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.770950  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.770958  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.774265  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.774894  874942 pod_ready.go:92] pod "etcd-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.774916  874942 pod_ready.go:81] duration metric: took 9.111753ms for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.774928  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.775003  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:40:02.775014  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.775023  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.775026  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.779947  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:02.780990  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:02.781008  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.781017  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.781025  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.785008  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.785537  874942 pod_ready.go:92] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.785551  874942 pod_ready.go:81] duration metric: took 10.616344ms for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.785560  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.931878  874942 request.go:629] Waited for 146.224222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:02.931940  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:02.931949  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.931960  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.931970  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.935466  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.132477  874942 request.go:629] Waited for 196.395923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.132541  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.132546  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.132554  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.132561  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.135715  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
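
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: when rest.Config leaves QPS and Burst at 0 (as in the client config dumped above), the defaults of 5 requests per second with a burst of 10 apply, so back-to-back GETs get briefly delayed. A hedged sketch of raising those limits, with purely illustrative values and a stand-in kubeconfig path:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Stand-in kubeconfig path; QPS/Burst values are illustrative, not minikube's settings.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5 requests/second when left at 0
		cfg.Burst = 100 // client-go default burst is 10 when left at 0
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
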
	I0520 12:40:03.331966  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:03.331990  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.331999  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.332006  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.335246  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.532442  874942 request.go:629] Waited for 196.411511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.532542  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.532553  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.532561  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.532568  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.536617  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:03.786367  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:03.786393  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.786401  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.786406  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.789859  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.932022  874942 request.go:629] Waited for 141.329959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.932099  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.932106  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.932116  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.932125  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.935494  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.286317  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:04.286341  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.286349  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.286354  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.290198  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.332286  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:04.332311  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.332322  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.332326  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.335814  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.786334  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:04.786358  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.786366  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.786371  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.789630  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.790424  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:04.790439  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.790447  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.790452  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.793731  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.794308  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:05.286800  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:05.286824  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.286835  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.286840  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.289844  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:05.290904  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:05.290919  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.290929  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.290935  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.293763  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:05.786624  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:05.786649  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.786659  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.786665  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.790724  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:05.791844  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:05.791860  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.791870  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.791875  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.795129  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:06.286239  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:06.286267  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.286275  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.286282  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.290346  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:06.292015  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:06.292035  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.292045  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.292050  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.294742  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:06.785780  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:06.785807  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.785814  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.785818  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.789096  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:06.789828  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:06.789846  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.789854  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.789860  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.792646  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:07.285744  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:07.285773  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.285784  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.285790  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.288771  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:07.289425  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:07.289444  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.289452  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.289455  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.292222  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:07.292904  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:07.786652  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:07.786674  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.786682  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.786687  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.790245  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:07.791028  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:07.791045  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.791050  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.791053  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.793963  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:08.286253  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:08.286284  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.286294  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.286301  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.289986  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:08.290909  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:08.290928  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.290942  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.290950  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.294015  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:08.786612  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:08.786645  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.786659  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.786664  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.790127  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:08.790866  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:08.790885  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.790896  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.790903  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.794534  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.286643  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:09.286666  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.286674  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.286677  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.289876  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.290439  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:09.290453  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.290461  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.290467  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.293570  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.294097  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:09.785903  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:09.785931  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.785943  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.785951  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.789125  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.789798  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:09.789815  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.789826  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.789831  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.792347  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:10.286716  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:10.286742  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.286748  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.286752  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.291643  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:10.292601  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:10.292618  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.292630  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.292637  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.295979  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:10.786472  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:10.786494  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.786503  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.786507  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.789576  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:10.790460  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:10.790478  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.790486  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.790492  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.793941  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.286457  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:11.286477  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.286486  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.286490  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.289966  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.291052  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:11.291118  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.291137  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.291143  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.294395  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.295277  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:11.785884  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:11.785911  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.785925  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.785934  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.789826  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.790885  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:11.790899  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.790907  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.790910  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.794609  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:12.286594  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:12.286616  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.286625  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.286630  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.290795  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:12.291822  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:12.291851  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.291861  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.291875  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.295092  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:12.786754  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:12.786780  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.786791  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.786796  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.789985  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:12.790797  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:12.790813  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.790820  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.790824  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.793863  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:13.286184  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:13.286209  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.286218  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.286222  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.289266  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:13.290009  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:13.290024  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.290032  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.290036  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.292839  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:13.786046  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:13.786069  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.786078  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.786081  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.790433  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:13.791704  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:13.791725  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.791741  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.791748  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.794517  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:13.795187  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:14.286086  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:14.286107  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.286115  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.286119  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.288995  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.289953  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:14.289970  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.289979  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.289984  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.292808  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.786110  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:14.786134  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.786141  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.786147  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.789532  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:14.790246  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:14.790262  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.790270  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.790274  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.793033  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.793542  874942 pod_ready.go:92] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.793565  874942 pod_ready.go:81] duration metric: took 12.007998033s for pod "etcd-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
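
The pod_ready waits apply the same pattern to pods: fetch the pod and look for a Ready condition with status True before moving on to the next system-critical pod. A minimal sketch of that condition check, again as an illustration rather than minikube's code:

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's Ready condition is True, the same
	// condition the pod_ready waits above poll for.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
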
	I0520 12:40:14.793588  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.793671  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263
	I0520 12:40:14.793685  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.793695  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.793700  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.795964  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.796706  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:14.796724  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.796732  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.796737  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.798788  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.799307  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.799328  874942 pod_ready.go:81] duration metric: took 5.730111ms for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.799340  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.799401  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263-m02
	I0520 12:40:14.799409  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.799418  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.799425  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.801975  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.802506  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:14.802519  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.802525  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.802528  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.804535  874942 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 12:40:14.805043  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.805058  874942 pod_ready.go:81] duration metric: took 5.710651ms for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.805066  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.805116  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263-m03
	I0520 12:40:14.805124  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.805130  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.805135  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.807349  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.808000  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:14.808016  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.808026  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.808031  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.810398  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.810900  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.810922  874942 pod_ready.go:81] duration metric: took 5.849942ms for pod "kube-apiserver-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.810933  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.810990  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263
	I0520 12:40:14.811000  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.811010  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.811018  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.813091  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.813524  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:14.813538  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.813545  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.813549  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.815403  874942 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 12:40:14.815740  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.815754  874942 pod_ready.go:81] duration metric: took 4.814235ms for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.815763  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.987195  874942 request.go:629] Waited for 171.343784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m02
	I0520 12:40:14.987256  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m02
	I0520 12:40:14.987263  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.987271  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.987277  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.990306  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.186567  874942 request.go:629] Waited for 195.376606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.186643  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.186651  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.186666  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.186674  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.190350  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.190909  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:15.190933  874942 pod_ready.go:81] duration metric: took 375.159925ms for pod "kube-controller-manager-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.190951  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.387116  874942 request.go:629] Waited for 196.056387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m03
	I0520 12:40:15.387214  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m03
	I0520 12:40:15.387234  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.387244  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.387260  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.390370  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.586350  874942 request.go:629] Waited for 194.939417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:15.586427  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:15.586432  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.586440  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.586447  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.589805  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.590499  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:15.590517  874942 pod_ready.go:81] duration metric: took 399.555096ms for pod "kube-controller-manager-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.590529  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.786923  874942 request.go:629] Waited for 196.315135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84x7f
	I0520 12:40:15.787012  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84x7f
	I0520 12:40:15.787046  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.787062  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.787074  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.790375  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.986220  874942 request.go:629] Waited for 195.226495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.986309  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.986318  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.986325  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.986330  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.989485  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.990321  874942 pod_ready.go:92] pod "kube-proxy-84x7f" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:15.990340  874942 pod_ready.go:81] duration metric: took 399.802434ms for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.990350  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c8zs5" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.186454  874942 request.go:629] Waited for 196.021403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c8zs5
	I0520 12:40:16.186542  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c8zs5
	I0520 12:40:16.186561  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.186588  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.186598  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.189804  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.386784  874942 request.go:629] Waited for 196.311388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:16.386870  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:16.386878  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.386888  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.386895  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.390239  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.390889  874942 pod_ready.go:92] pod "kube-proxy-c8zs5" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:16.390911  874942 pod_ready.go:81] duration metric: took 400.553474ms for pod "kube-proxy-c8zs5" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.390923  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.587027  874942 request.go:629] Waited for 196.000061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:40:16.587091  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:40:16.587095  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.587104  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.587115  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.590184  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.786264  874942 request.go:629] Waited for 195.288041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:16.786329  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:16.786336  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.786347  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.786356  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.790169  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.790677  874942 pod_ready.go:92] pod "kube-proxy-z5zvt" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:16.790698  874942 pod_ready.go:81] duration metric: took 399.767609ms for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.790708  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.986973  874942 request.go:629] Waited for 196.161345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:40:16.987053  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:40:16.987070  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.987081  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.987086  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.990504  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.186800  874942 request.go:629] Waited for 195.321566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:17.186903  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:17.186911  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.186922  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.186930  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.190016  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.190633  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:17.190658  874942 pod_ready.go:81] duration metric: took 399.940903ms for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.190673  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.386723  874942 request.go:629] Waited for 195.940912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m02
	I0520 12:40:17.386787  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m02
	I0520 12:40:17.386792  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.386800  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.386805  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.390225  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.586762  874942 request.go:629] Waited for 195.433055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:17.586852  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:17.586857  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.586865  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.586871  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.590114  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.590779  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:17.590803  874942 pod_ready.go:81] duration metric: took 400.117772ms for pod "kube-scheduler-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.590815  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.786573  874942 request.go:629] Waited for 195.642396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m03
	I0520 12:40:17.786673  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m03
	I0520 12:40:17.786683  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.786694  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.786703  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.789794  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.987072  874942 request.go:629] Waited for 196.422346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:17.987141  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:17.987146  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.987154  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.987160  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.990724  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.991355  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:17.991379  874942 pod_ready.go:81] duration metric: took 400.554642ms for pod "kube-scheduler-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.991393  874942 pod_ready.go:38] duration metric: took 15.255524587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:40:17.991412  874942 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:40:17.991482  874942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:40:18.011407  874942 api_server.go:72] duration metric: took 18.651540784s to wait for apiserver process to appear ...
	I0520 12:40:18.011432  874942 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:40:18.011456  874942 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0520 12:40:18.019993  874942 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0520 12:40:18.020061  874942 round_trippers.go:463] GET https://192.168.39.182:8443/version
	I0520 12:40:18.020067  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.020079  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.020087  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.021263  874942 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 12:40:18.021325  874942 api_server.go:141] control plane version: v1.30.1
	I0520 12:40:18.021341  874942 api_server.go:131] duration metric: took 9.901753ms to wait for apiserver health ...
	I0520 12:40:18.021355  874942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:40:18.186656  874942 request.go:629] Waited for 165.194872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.186739  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.186757  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.186770  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.186776  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.193718  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:40:18.199912  874942 system_pods.go:59] 24 kube-system pods found
	I0520 12:40:18.199936  874942 system_pods.go:61] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:40:18.199940  874942 system_pods.go:61] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:40:18.199944  874942 system_pods.go:61] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:40:18.199947  874942 system_pods.go:61] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:40:18.199950  874942 system_pods.go:61] "etcd-ha-252263-m03" [76500ab4-ce7c-43b9-868b-f46f90fc54c4] Running
	I0520 12:40:18.199953  874942 system_pods.go:61] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:40:18.199956  874942 system_pods.go:61] "kindnet-d67g2" [a66b7178-4b9d-4958-898b-37ff6350432a] Running
	I0520 12:40:18.199958  874942 system_pods.go:61] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:40:18.199961  874942 system_pods.go:61] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:40:18.199965  874942 system_pods.go:61] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:40:18.199969  874942 system_pods.go:61] "kube-apiserver-ha-252263-m03" [7f48b761-0d1e-48f3-8281-27a491a2a4b2] Running
	I0520 12:40:18.199972  874942 system_pods.go:61] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:40:18.199978  874942 system_pods.go:61] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:40:18.199983  874942 system_pods.go:61] "kube-controller-manager-ha-252263-m03" [09306613-e277-4460-9e5a-0b52e864207e] Running
	I0520 12:40:18.199988  874942 system_pods.go:61] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:40:18.199991  874942 system_pods.go:61] "kube-proxy-c8zs5" [0a2ddd4c-b435-4bd5-9a31-16f8ea676656] Running
	I0520 12:40:18.199997  874942 system_pods.go:61] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:40:18.200000  874942 system_pods.go:61] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:40:18.200003  874942 system_pods.go:61] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:40:18.200006  874942 system_pods.go:61] "kube-scheduler-ha-252263-m03" [feb4de60-8201-433b-9ac4-bf0e28dac337] Running
	I0520 12:40:18.200010  874942 system_pods.go:61] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:40:18.200013  874942 system_pods.go:61] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:40:18.200015  874942 system_pods.go:61] "kube-vip-ha-252263-m03" [52e2d893-a58f-4e3d-83d9-208bd7f3b04f] Running
	I0520 12:40:18.200018  874942 system_pods.go:61] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:40:18.200022  874942 system_pods.go:74] duration metric: took 178.659158ms to wait for pod list to return data ...
	I0520 12:40:18.200030  874942 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:40:18.386458  874942 request.go:629] Waited for 186.350768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:40:18.386519  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:40:18.386523  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.386531  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.386534  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.390190  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:18.390324  874942 default_sa.go:45] found service account: "default"
	I0520 12:40:18.390345  874942 default_sa.go:55] duration metric: took 190.306583ms for default service account to be created ...
	I0520 12:40:18.390356  874942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:40:18.587000  874942 request.go:629] Waited for 196.552739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.587066  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.587071  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.587080  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.587083  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.593251  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:40:18.600444  874942 system_pods.go:86] 24 kube-system pods found
	I0520 12:40:18.600472  874942 system_pods.go:89] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:40:18.600478  874942 system_pods.go:89] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:40:18.600483  874942 system_pods.go:89] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:40:18.600487  874942 system_pods.go:89] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:40:18.600491  874942 system_pods.go:89] "etcd-ha-252263-m03" [76500ab4-ce7c-43b9-868b-f46f90fc54c4] Running
	I0520 12:40:18.600494  874942 system_pods.go:89] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:40:18.600499  874942 system_pods.go:89] "kindnet-d67g2" [a66b7178-4b9d-4958-898b-37ff6350432a] Running
	I0520 12:40:18.600503  874942 system_pods.go:89] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:40:18.600507  874942 system_pods.go:89] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:40:18.600511  874942 system_pods.go:89] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:40:18.600518  874942 system_pods.go:89] "kube-apiserver-ha-252263-m03" [7f48b761-0d1e-48f3-8281-27a491a2a4b2] Running
	I0520 12:40:18.600522  874942 system_pods.go:89] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:40:18.600530  874942 system_pods.go:89] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:40:18.600533  874942 system_pods.go:89] "kube-controller-manager-ha-252263-m03" [09306613-e277-4460-9e5a-0b52e864207e] Running
	I0520 12:40:18.600537  874942 system_pods.go:89] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:40:18.600541  874942 system_pods.go:89] "kube-proxy-c8zs5" [0a2ddd4c-b435-4bd5-9a31-16f8ea676656] Running
	I0520 12:40:18.600546  874942 system_pods.go:89] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:40:18.600550  874942 system_pods.go:89] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:40:18.600554  874942 system_pods.go:89] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:40:18.600560  874942 system_pods.go:89] "kube-scheduler-ha-252263-m03" [feb4de60-8201-433b-9ac4-bf0e28dac337] Running
	I0520 12:40:18.600564  874942 system_pods.go:89] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:40:18.600570  874942 system_pods.go:89] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:40:18.600573  874942 system_pods.go:89] "kube-vip-ha-252263-m03" [52e2d893-a58f-4e3d-83d9-208bd7f3b04f] Running
	I0520 12:40:18.600577  874942 system_pods.go:89] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:40:18.600582  874942 system_pods.go:126] duration metric: took 210.217723ms to wait for k8s-apps to be running ...
	I0520 12:40:18.600592  874942 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:40:18.600645  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:40:18.615703  874942 system_svc.go:56] duration metric: took 15.100667ms WaitForService to wait for kubelet
	I0520 12:40:18.615728  874942 kubeadm.go:576] duration metric: took 19.255867278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:40:18.615747  874942 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:40:18.787128  874942 request.go:629] Waited for 171.293819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes
	I0520 12:40:18.787201  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes
	I0520 12:40:18.787207  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.787221  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.787231  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.791588  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:18.792796  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:40:18.792819  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:40:18.792830  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:40:18.792833  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:40:18.792836  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:40:18.792839  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:40:18.792843  874942 node_conditions.go:105] duration metric: took 177.092352ms to run NodePressure ...
	I0520 12:40:18.792855  874942 start.go:240] waiting for startup goroutines ...
	I0520 12:40:18.792875  874942 start.go:254] writing updated cluster config ...
	I0520 12:40:18.793237  874942 ssh_runner.go:195] Run: rm -f paused
	I0520 12:40:18.844454  874942 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 12:40:18.846614  874942 out.go:177] * Done! kubectl is now configured to use "ha-252263" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.478557141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209024478528897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f49292d-b84d-4e8a-ba28-6b3fef8ee259 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.480941088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8488e102-7596-425f-bd20-33cdbb6a900e name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.481006228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8488e102-7596-425f-bd20-33cdbb6a900e name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.481399578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8488e102-7596-425f-bd20-33cdbb6a900e name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.533573823Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbb7c468-df3a-40b2-9e69-ad024566bcb3 name=/runtime.v1.RuntimeService/Version
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.533665393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbb7c468-df3a-40b2-9e69-ad024566bcb3 name=/runtime.v1.RuntimeService/Version
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.534702780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3de40434-78f7-4266-bf38-2afbc46d2615 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.535256575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209024535230272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3de40434-78f7-4266-bf38-2afbc46d2615 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.535851905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ea2a759-c178-4a2d-846d-4e1dc96c7069 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.535965190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ea2a759-c178-4a2d-846d-4e1dc96c7069 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.536233588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ea2a759-c178-4a2d-846d-4e1dc96c7069 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.578475744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93db3004-b6a8-4e0d-b148-d6779e8dbef9 name=/runtime.v1.RuntimeService/Version
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.578572445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93db3004-b6a8-4e0d-b148-d6779e8dbef9 name=/runtime.v1.RuntimeService/Version
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.579833209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d974a4a-b214-4dbd-bd45-7f017ebfaff5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.580379605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209024580353561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d974a4a-b214-4dbd-bd45-7f017ebfaff5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.581077272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73894340-d0a0-4b03-af83-ffa206a1535f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.581142905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73894340-d0a0-4b03-af83-ffa206a1535f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.581358215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73894340-d0a0-4b03-af83-ffa206a1535f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.621584130Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1801001-00c6-429a-95f9-b86762282af9 name=/runtime.v1.RuntimeService/Version
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.621656711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1801001-00c6-429a-95f9-b86762282af9 name=/runtime.v1.RuntimeService/Version
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.623105339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3467feaf-99d5-4ff8-8bfd-6eb3a9f71ab1 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.623505157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209024623484458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3467feaf-99d5-4ff8-8bfd-6eb3a9f71ab1 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.623976726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21d7d479-9676-44a0-9e84-5f421a5a3fb9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.624032728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21d7d479-9676-44a0-9e84-5f421a5a3fb9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:43:44 ha-252263 crio[680]: time="2024-05-20 12:43:44.624259350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21d7d479-9676-44a0-9e84-5f421a5a3fb9 name=/runtime.v1.RuntimeService/ListContainers
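	
	The Version, ImageFsInfo and ListContainers requests captured above are plain CRI calls against CRI-O's socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation). As an illustrative sketch, not a verbatim part of this capture, the same data can be pulled by hand from the node with crictl (profile name taken from this run):
	
	    minikube ssh -p ha-252263                                                   # open a shell on the primary control-plane node
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo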
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fb77a13cb639       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e3f7317af104f       busybox-fc5497c4f-vdgxd
	0aaaa2c2d0a2a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   8217c5dc10b50       coredns-7db6d8ff4d-c2vkj
	81df7a9501142       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   43b0b303d8ecf       coredns-7db6d8ff4d-96h5w
	f4931bfff375c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   509e3f4d08fed       storage-provisioner
	0fab498e261e0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   f86d5e1365cb8       kindnet-8vkjc
	8481a0a858b8f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                0                   85f3c6afc77a5       kube-proxy-z5zvt
	8e7cb9bc29277       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f80807f22bebc       kube-vip-ha-252263
	78352b69293ae       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      6 minutes ago       Running             kube-apiserver            0                   73772985d8fcc       kube-apiserver-ha-252263
	8516a1fdea0a5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago       Running             kube-scheduler            0                   e9f3670ad0515       kube-scheduler-ha-252263
	38216273b9bc6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago       Running             kube-controller-manager   0                   530a8699d490c       kube-controller-manager-ha-252263
	57b99e90b3f2c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   9dcb3183f7b71       etcd-ha-252263
	
	
	==> coredns [0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a] <==
	[INFO] 10.244.1.2:39515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230927s
	[INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000064874s
	[INFO] 10.244.1.2:58037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001770245s
	[INFO] 10.244.0.4:48741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012283219s
	[INFO] 10.244.0.4:49128 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009738s
	[INFO] 10.244.2.2:33816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646431s
	[INFO] 10.244.2.2:35739 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000262525s
	[INFO] 10.244.2.2:38598 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158046s
	[INFO] 10.244.2.2:58591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129009s
	[INFO] 10.244.2.2:42154 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077099s
	[INFO] 10.244.1.2:55966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236408s
	[INFO] 10.244.1.2:38116 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165417s
	[INFO] 10.244.1.2:42765 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013421s
	[INFO] 10.244.0.4:43917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087757s
	[INFO] 10.244.2.2:39196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131607s
	[INFO] 10.244.2.2:53256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139178s
	[INFO] 10.244.2.2:51674 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089462s
	[INFO] 10.244.2.2:49072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088789s
	[INFO] 10.244.1.2:56181 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013731s
	[INFO] 10.244.1.2:41238 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121064s
	[INFO] 10.244.0.4:51538 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100171s
	[INFO] 10.244.2.2:59762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112653s
	[INFO] 10.244.2.2:48400 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080614s
	[INFO] 10.244.1.2:54360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166063s
	[INFO] 10.244.1.2:51350 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071222s
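	
	These CoreDNS entries follow the log plugin's default layout: client address:port, query id, query type and class, name, transport, request size, DO bit, EDNS buffer size, then the response code, flags, response size, and duration. To watch the same stream from this replica directly (pod name taken from the container listing above), something like:
	
	    kubectl -n kube-system logs -f coredns-7db6d8ff4d-c2vkj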
	
	
	==> coredns [81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7] <==
	[INFO] 10.244.0.4:38461 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003816118s
	[INFO] 10.244.0.4:34424 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139726s
	[INFO] 10.244.0.4:60068 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136414s
	[INFO] 10.244.0.4:60267 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175538s
	[INFO] 10.244.0.4:34444 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098358s
	[INFO] 10.244.2.2:57093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167419s
	[INFO] 10.244.2.2:33999 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001382726s
	[INFO] 10.244.2.2:34539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090296s
	[INFO] 10.244.1.2:60979 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00180355s
	[INFO] 10.244.1.2:60301 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084438s
	[INFO] 10.244.1.2:44989 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001381049s
	[INFO] 10.244.1.2:51684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008851s
	[INFO] 10.244.1.2:37865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122394s
	[INFO] 10.244.0.4:41864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103464s
	[INFO] 10.244.0.4:48776 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078784s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060251s
	[INFO] 10.244.1.2:44802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115237s
	[INFO] 10.244.1.2:33948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012433s
	[INFO] 10.244.0.4:54781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008753s
	[INFO] 10.244.0.4:54168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243725s
	[INFO] 10.244.0.4:60539 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140289s
	[INFO] 10.244.2.2:37865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093682s
	[INFO] 10.244.2.2:38339 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116317s
	[INFO] 10.244.1.2:44551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117883s
	[INFO] 10.244.1.2:42004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008187s
	
	
	==> describe nodes <==
	Name:               ha-252263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_37_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:43:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-252263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 35935ea8555a4df9a418abd1fd7734ca
	  System UUID:                35935ea8-555a-4df9-a418-abd1fd7734ca
	  Boot ID:                    96326bcd-6af4-4e73-8e52-8d2d55c0ef49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vdgxd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 coredns-7db6d8ff4d-96h5w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m53s
	  kube-system                 coredns-7db6d8ff4d-c2vkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m53s
	  kube-system                 etcd-ha-252263                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-8vkjc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m53s
	  kube-system                 kube-apiserver-ha-252263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-252263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-z5zvt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-ha-252263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-252263                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m52s  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m7s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m6s   kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s   kubelet          Node ha-252263 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s   kubelet          Node ha-252263 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m54s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal  NodeReady                5m51s  kubelet          Node ha-252263 status is now: NodeReady
	  Normal  RegisteredNode           4m43s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal  RegisteredNode           3m31s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	
	
	Name:               ha-252263-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_38_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:38:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:41:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-252263-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 39c8edfb8be441aab0eaa91516d89ad1
	  System UUID:                39c8edfb-8be4-41aa-b0ea-a91516d89ad1
	  Boot ID:                    47fc0a20-7d26-4ae4-84a6-254956052d62
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqdrj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 etcd-ha-252263-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m
	  kube-system                 kindnet-lfz72                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-252263-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-ha-252263-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-84x7f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-252263-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-vip-ha-252263-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node ha-252263-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m59s                node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           4m43s                node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           3m31s                node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  NodeNotReady             106s                 node-controller  Node ha-252263-m02 status is now: NodeNotReady
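	
	The Unknown conditions and the node.kubernetes.io/unreachable taints above mean the node controller stopped receiving kubelet heartbeats from ha-252263-m02 (NodeNotReady was recorded 106s before this capture); pods on the node will be evicted once their unreachable tolerations expire. A quick way to confirm from the control plane, assuming a kubeconfig pointing at this cluster:
	
	    kubectl get nodes -o wide                                         # ha-252263-m02 should report NotReady
	    kubectl describe node ha-252263-m02                               # same conditions, taints and events as above
	    kubectl get pods -A --field-selector spec.nodeName=ha-252263-m02  # workloads still scheduled on the node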
	
	
	Name:               ha-252263-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_39_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:39:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:43:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-252263-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3787355a13534f32abf4729d5f862897
	  System UUID:                3787355a-1353-4f32-abf4-729d5f862897
	  Boot ID:                    68a704e8-f575-4b7b-98a9-d727d451be92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xq6j6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-252263-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m49s
	  kube-system                 kindnet-d67g2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-apiserver-ha-252263-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ha-252263-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-c8zs5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-ha-252263-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-vip-ha-252263-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node ha-252263-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal  RegisteredNode           3m32s                  node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	
	
	Name:               ha-252263-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_40_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:40:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:43:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:40:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:40:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:40:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:41:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-252263-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e01b8d01b7b3442aafbd1460443cc06b
	  System UUID:                e01b8d01-b7b3-442a-afbd-1460443cc06b
	  Boot ID:                    14648e88-4164-483b-8f3c-95db62d2c79a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5st4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m50s
	  kube-system                 kube-proxy-gww58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x2 over 2m50s)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x2 over 2m50s)  kubelet          Node ha-252263-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x2 over 2m50s)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-252263-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May20 12:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051150] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040296] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[May20 12:37] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.429532] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.630936] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.720517] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056941] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063479] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.182637] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137786] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261133] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.100200] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.178110] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.059165] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.929456] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.070241] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.174384] kauditd_printk_skb: 21 callbacks suppressed
	[May20 12:38] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b] <==
	{"level":"warn","ts":"2024-05-20T12:43:44.902674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.913062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.925517Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.933071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.9388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.94596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.956644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.965545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.974558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.978372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.982152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.983726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:44.998013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.000375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.00575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.009692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.029107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.03712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.045119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.05743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.0718Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.076497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.085075Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:43:45.09711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:43:45 up 6 min,  0 users,  load average: 0.62, 0.28, 0.13
	Linux ha-252263 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0] <==
	I0520 12:43:13.796325       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:43:23.803217       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:43:23.803329       1 main.go:227] handling current node
	I0520 12:43:23.803361       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:43:23.803378       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:43:23.803711       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:43:23.803821       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:43:23.804098       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:43:23.804129       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:43:33.810740       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:43:33.810862       1 main.go:227] handling current node
	I0520 12:43:33.810886       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:43:33.810956       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:43:33.811145       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:43:33.811185       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:43:33.811288       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:43:33.811360       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:43:43.818343       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:43:43.818509       1 main.go:227] handling current node
	I0520 12:43:43.818553       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:43:43.818580       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:43:43.818752       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:43:43.818784       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:43:43.818864       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:43:43.818963       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf] <==
	I0520 12:37:38.031320       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:37:38.064537       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:37:38.076969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:37:51.151473       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0520 12:37:51.372610       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0520 12:40:22.331370       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34392: use of closed network connection
	E0520 12:40:22.526129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34410: use of closed network connection
	E0520 12:40:22.730180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34432: use of closed network connection
	E0520 12:40:22.941054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34448: use of closed network connection
	E0520 12:40:23.117192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34466: use of closed network connection
	E0520 12:40:23.311700       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34480: use of closed network connection
	E0520 12:40:23.487778       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34496: use of closed network connection
	E0520 12:40:23.672103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34522: use of closed network connection
	E0520 12:40:23.843373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55618: use of closed network connection
	E0520 12:40:24.128795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55652: use of closed network connection
	E0520 12:40:24.307485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55666: use of closed network connection
	E0520 12:40:24.510106       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55686: use of closed network connection
	E0520 12:40:24.691589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55704: use of closed network connection
	E0520 12:40:24.875368       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55718: use of closed network connection
	I0520 12:40:58.254401       1 trace.go:236] Trace[229593480]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c35a0634-6027-4012-9c09-76f1c2392ff2,client:192.168.39.41,api-group:,api-version:v1,name:kindnet-mvk7f,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-mvk7f/status,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PATCH (20-May-2024 12:40:57.743) (total time: 510ms):
	Trace[229593480]: ["GuaranteedUpdate etcd3" audit-id:c35a0634-6027-4012-9c09-76f1c2392ff2,key:/pods/kube-system/kindnet-mvk7f,type:*core.Pod,resource:pods 510ms (12:40:57.743)
	Trace[229593480]:  ---"Txn call completed" 501ms (12:40:58.253)]
	Trace[229593480]: ---"Object stored in database" 502ms (12:40:58.254)
	Trace[229593480]: [510.739074ms] [510.739074ms] END
	W0520 12:41:36.827204       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.60]
	
	
	==> kube-controller-manager [38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257] <==
	I0520 12:38:42.834013       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-252263-m02" podCIDRs=["10.244.1.0/24"]
	I0520 12:38:45.472654       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-252263-m02"
	I0520 12:39:54.524099       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-252263-m03\" does not exist"
	I0520 12:39:54.541790       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-252263-m03" podCIDRs=["10.244.2.0/24"]
	I0520 12:39:55.500611       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-252263-m03"
	I0520 12:40:19.785836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.376263ms"
	I0520 12:40:19.809009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.105371ms"
	I0520 12:40:19.935707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.925972ms"
	I0520 12:40:20.069851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.273115ms"
	I0520 12:40:20.091563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.614626ms"
	I0520 12:40:20.091659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.374µs"
	I0520 12:40:21.600932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.072792ms"
	I0520 12:40:21.601384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="195.493µs"
	I0520 12:40:21.654564       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.506086ms"
	I0520 12:40:21.654663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.425µs"
	I0520 12:40:21.830316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.870985ms"
	I0520 12:40:21.830724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.665µs"
	E0520 12:40:55.382232       1 certificate_controller.go:146] Sync csr-n29hv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-n29hv": the object has been modified; please apply your changes to the latest version and try again
	I0520 12:40:55.653169       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-252263-m04\" does not exist"
	I0520 12:40:55.697169       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-252263-m04" podCIDRs=["10.244.3.0/24"]
	I0520 12:41:00.542594       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-252263-m04"
	I0520 12:41:04.319859       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-252263-m04"
	I0520 12:41:58.868588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-252263-m04"
	I0520 12:41:59.118680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.017617ms"
	I0520 12:41:59.118794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.429µs"
	
	
	==> kube-proxy [8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e] <==
	I0520 12:37:52.284889       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:37:52.315219       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.182"]
	I0520 12:37:52.419934       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:37:52.419972       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:37:52.419990       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:37:52.428614       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:37:52.428936       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:37:52.428984       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:37:52.430685       1 config.go:192] "Starting service config controller"
	I0520 12:37:52.430728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:37:52.430756       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:37:52.430776       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:37:52.431144       1 config.go:319] "Starting node config controller"
	I0520 12:37:52.431170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:37:52.531313       1 shared_informer.go:320] Caches are synced for node config
	I0520 12:37:52.531363       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:37:52.531436       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290] <==
	W0520 12:37:36.327547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:37:36.327607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:37:36.412567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:37:36.412614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 12:37:38.071521       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 12:39:54.603992       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d67g2\": pod kindnet-d67g2 is already assigned to node \"ha-252263-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-d67g2" node="ha-252263-m03"
	E0520 12:39:54.604172       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a66b7178-4b9d-4958-898b-37ff6350432a(kube-system/kindnet-d67g2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d67g2"
	E0520 12:39:54.604251       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d67g2\": pod kindnet-d67g2 is already assigned to node \"ha-252263-m03\"" pod="kube-system/kindnet-d67g2"
	I0520 12:39:54.604321       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d67g2" node="ha-252263-m03"
	E0520 12:39:54.603992       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-c8zs5\": pod kube-proxy-c8zs5 is already assigned to node \"ha-252263-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-c8zs5" node="ha-252263-m03"
	E0520 12:39:54.607016       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0a2ddd4c-b435-4bd5-9a31-16f8ea676656(kube-system/kube-proxy-c8zs5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-c8zs5"
	E0520 12:39:54.607037       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-c8zs5\": pod kube-proxy-c8zs5 is already assigned to node \"ha-252263-m03\"" pod="kube-system/kube-proxy-c8zs5"
	I0520 12:39:54.607168       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-c8zs5" node="ha-252263-m03"
	E0520 12:40:19.800258       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vdgxd\": pod busybox-fc5497c4f-vdgxd is already assigned to node \"ha-252263\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vdgxd" node="ha-252263"
	E0520 12:40:19.800346       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 57097c7d-bdee-48f4-8736-264f6cfaee92(default/busybox-fc5497c4f-vdgxd) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-vdgxd"
	E0520 12:40:19.800369       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vdgxd\": pod busybox-fc5497c4f-vdgxd is already assigned to node \"ha-252263\"" pod="default/busybox-fc5497c4f-vdgxd"
	I0520 12:40:19.800390       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-vdgxd" node="ha-252263"
	E0520 12:40:55.749329       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ptnbj\": pod kube-proxy-ptnbj is already assigned to node \"ha-252263-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ptnbj" node="ha-252263-m04"
	E0520 12:40:55.749418       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c6ae22ff-6dcd-43cb-9342-f5348f67d3a3(kube-system/kube-proxy-ptnbj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ptnbj"
	E0520 12:40:55.749435       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ptnbj\": pod kube-proxy-ptnbj is already assigned to node \"ha-252263-m04\"" pod="kube-system/kube-proxy-ptnbj"
	I0520 12:40:55.749459       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ptnbj" node="ha-252263-m04"
	E0520 12:40:55.756695       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l25xs\": pod kindnet-l25xs is already assigned to node \"ha-252263-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l25xs" node="ha-252263-m04"
	E0520 12:40:55.759149       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0239aeff-36c5-438b-ada6-a3f56a4f5efa(kube-system/kindnet-l25xs) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-l25xs"
	E0520 12:40:55.759239       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l25xs\": pod kindnet-l25xs is already assigned to node \"ha-252263-m04\"" pod="kube-system/kindnet-l25xs"
	I0520 12:40:55.759288       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-l25xs" node="ha-252263-m04"
	
	
	==> kubelet <==
	May 20 12:39:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:39:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:39:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:40:19 ha-252263 kubelet[1370]: I0520 12:40:19.774275    1370 topology_manager.go:215] "Topology Admit Handler" podUID="57097c7d-bdee-48f4-8736-264f6cfaee92" podNamespace="default" podName="busybox-fc5497c4f-vdgxd"
	May 20 12:40:19 ha-252263 kubelet[1370]: I0520 12:40:19.777784    1370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45487\" (UniqueName: \"kubernetes.io/projected/57097c7d-bdee-48f4-8736-264f6cfaee92-kube-api-access-45487\") pod \"busybox-fc5497c4f-vdgxd\" (UID: \"57097c7d-bdee-48f4-8736-264f6cfaee92\") " pod="default/busybox-fc5497c4f-vdgxd"
	May 20 12:40:37 ha-252263 kubelet[1370]: E0520 12:40:37.947632    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:40:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:40:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:40:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:40:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:41:37 ha-252263 kubelet[1370]: E0520 12:41:37.946150    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:41:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:41:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:41:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:41:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:42:37 ha-252263 kubelet[1370]: E0520 12:42:37.946562    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:42:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:42:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:42:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:42:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:43:37 ha-252263 kubelet[1370]: E0520 12:43:37.951104    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:43:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:43:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:43:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:43:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-252263 -n ha-252263
helpers_test.go:261: (dbg) Run:  kubectl --context ha-252263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.90s)
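For local triage, the post-mortem checks recorded above (helpers_test.go:254 and helpers_test.go:261) can be re-run by hand; a minimal sketch, assuming the ha-252263 profile from this run is still up on the build host and out/minikube-linux-amd64 has already been built in the workspace (quoting added only to keep the shell from expanding the template and jsonpath braces):

	# Same per-node apiserver status check as helpers_test.go:254 above
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-252263 -n ha-252263
	# Same non-Running pod listing as helpers_test.go:261 above
	kubectl --context ha-252263 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running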

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (3.200304432s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:43:49.738341  879692 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:43:49.738448  879692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:49.738457  879692 out.go:304] Setting ErrFile to fd 2...
	I0520 12:43:49.738461  879692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:49.738665  879692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:43:49.738821  879692 out.go:298] Setting JSON to false
	I0520 12:43:49.738877  879692 mustload.go:65] Loading cluster: ha-252263
	I0520 12:43:49.738998  879692 notify.go:220] Checking for updates...
	I0520 12:43:49.739319  879692 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:43:49.739337  879692 status.go:255] checking status of ha-252263 ...
	I0520 12:43:49.739718  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:49.739763  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:49.758878  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I0520 12:43:49.759327  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:49.759873  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:49.759889  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:49.760217  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:49.760421  879692 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:43:49.761824  879692 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:43:49.761842  879692 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:49.762135  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:49.762171  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:49.777251  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0520 12:43:49.777589  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:49.778080  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:49.778101  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:49.778388  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:49.778565  879692 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:43:49.781085  879692 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:49.781452  879692 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:49.781486  879692 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:49.781584  879692 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:49.781933  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:49.781972  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:49.796168  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0520 12:43:49.796617  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:49.797070  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:49.797090  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:49.797405  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:49.797586  879692 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:43:49.797772  879692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:49.797797  879692 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:43:49.800427  879692 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:49.800815  879692 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:49.800840  879692 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:49.800969  879692 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:43:49.801147  879692 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:43:49.801293  879692 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:43:49.801420  879692 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:43:49.891625  879692 ssh_runner.go:195] Run: systemctl --version
	I0520 12:43:49.898343  879692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:49.913866  879692 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:49.913901  879692 api_server.go:166] Checking apiserver status ...
	I0520 12:43:49.913932  879692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:49.927972  879692 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:43:49.937884  879692 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:49.937930  879692 ssh_runner.go:195] Run: ls
	I0520 12:43:49.942386  879692 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:49.946410  879692 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:49.946434  879692 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:43:49.946444  879692 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:49.946461  879692 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:43:49.946756  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:49.946795  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:49.961429  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0520 12:43:49.961810  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:49.962262  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:49.962284  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:49.962625  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:49.962909  879692 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:43:49.964509  879692 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:43:49.964527  879692 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:49.964841  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:49.964875  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:49.979355  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40891
	I0520 12:43:49.979714  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:49.980129  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:49.980147  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:49.980443  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:49.980631  879692 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:43:49.983191  879692 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:49.983603  879692 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:49.983638  879692 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:49.983820  879692 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:49.984097  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:49.984135  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:49.998081  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0520 12:43:49.998458  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:49.998930  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:49.998953  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:49.999283  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:49.999454  879692 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:43:49.999640  879692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:49.999658  879692 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:43:50.002431  879692 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:50.002937  879692 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:50.002957  879692 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:50.003089  879692 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:43:50.003274  879692 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:43:50.003428  879692 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:43:50.003676  879692 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:43:52.539203  879692 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:43:52.539304  879692 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:43:52.539322  879692 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:52.539335  879692 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:43:52.539381  879692 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:52.539393  879692 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:43:52.539829  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:52.539889  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:52.555573  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0520 12:43:52.556095  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:52.556534  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:52.556555  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:52.556909  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:52.557084  879692 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:43:52.558665  879692 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:43:52.558683  879692 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:43:52.559023  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:52.559068  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:52.573543  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0520 12:43:52.573951  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:52.574429  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:52.574463  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:52.574779  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:52.575005  879692 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:43:52.577618  879692 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:52.578012  879692 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:43:52.578052  879692 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:52.578212  879692 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:43:52.578593  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:52.578652  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:52.593233  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0520 12:43:52.593697  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:52.594188  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:52.594211  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:52.594573  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:52.594806  879692 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:43:52.595034  879692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:52.595059  879692 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:43:52.597907  879692 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:52.598324  879692 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:43:52.598355  879692 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:52.598464  879692 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:43:52.598652  879692 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:43:52.598802  879692 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:43:52.598986  879692 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:43:52.678888  879692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:52.697163  879692 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:52.697195  879692 api_server.go:166] Checking apiserver status ...
	I0520 12:43:52.697235  879692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:52.711234  879692 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:43:52.720905  879692 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:52.720962  879692 ssh_runner.go:195] Run: ls
	I0520 12:43:52.724962  879692 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:52.731416  879692 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:52.731441  879692 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:43:52.731452  879692 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:52.731473  879692 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:43:52.731773  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:52.731825  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:52.747436  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0520 12:43:52.747893  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:52.748363  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:52.748383  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:52.748694  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:52.748901  879692 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:43:52.750645  879692 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:43:52.750662  879692 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:43:52.750998  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:52.751040  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:52.765946  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0520 12:43:52.766405  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:52.766927  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:52.766949  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:52.767313  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:52.767499  879692 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:43:52.770255  879692 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:52.770692  879692 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:43:52.770728  879692 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:52.770879  879692 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:43:52.771202  879692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:52.771237  879692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:52.785513  879692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0520 12:43:52.785873  879692 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:52.786307  879692 main.go:141] libmachine: Using API Version  1
	I0520 12:43:52.786328  879692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:52.786628  879692 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:52.786842  879692 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:43:52.787042  879692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:52.787065  879692 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:43:52.789363  879692 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:52.789908  879692 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:43:52.789931  879692 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:52.790060  879692 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:43:52.790222  879692 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:43:52.790421  879692 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:43:52.790589  879692 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:43:52.878905  879692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:52.896208  879692 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
E0520 12:43:54.361734  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (2.424671082s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:43:53.583599  879793 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:43:53.583731  879793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:53.583744  879793 out.go:304] Setting ErrFile to fd 2...
	I0520 12:43:53.583749  879793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:53.583932  879793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:43:53.584146  879793 out.go:298] Setting JSON to false
	I0520 12:43:53.584178  879793 mustload.go:65] Loading cluster: ha-252263
	I0520 12:43:53.584277  879793 notify.go:220] Checking for updates...
	I0520 12:43:53.584605  879793 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:43:53.584622  879793 status.go:255] checking status of ha-252263 ...
	I0520 12:43:53.585018  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:53.585107  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:53.603301  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0520 12:43:53.603794  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:53.604380  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:53.604409  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:53.604829  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:53.605075  879793 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:43:53.606704  879793 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:43:53.606732  879793 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:53.607082  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:53.607129  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:53.622540  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0520 12:43:53.622872  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:53.623453  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:53.623473  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:53.623765  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:53.623930  879793 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:43:53.626664  879793 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:53.627146  879793 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:53.627196  879793 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:53.627443  879793 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:53.627842  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:53.627896  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:53.641826  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I0520 12:43:53.642223  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:53.642649  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:53.642675  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:53.643032  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:53.643210  879793 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:43:53.643427  879793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:53.643457  879793 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:43:53.646145  879793 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:53.646526  879793 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:53.646559  879793 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:53.646696  879793 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:43:53.646901  879793 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:43:53.647062  879793 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:43:53.647247  879793 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:43:53.730487  879793 ssh_runner.go:195] Run: systemctl --version
	I0520 12:43:53.736810  879793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:53.751333  879793 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:53.751373  879793 api_server.go:166] Checking apiserver status ...
	I0520 12:43:53.751413  879793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:53.766257  879793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:43:53.777619  879793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:53.777678  879793 ssh_runner.go:195] Run: ls
	I0520 12:43:53.782236  879793 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:53.788096  879793 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:53.788123  879793 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:43:53.788136  879793 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:53.788167  879793 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:43:53.788537  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:53.788578  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:53.804065  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I0520 12:43:53.804591  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:53.805080  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:53.805102  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:53.805393  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:53.805581  879793 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:43:53.807107  879793 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:43:53.807127  879793 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:53.807439  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:53.807489  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:53.822829  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0520 12:43:53.823340  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:53.823821  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:53.823842  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:53.824153  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:53.824336  879793 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:43:53.827024  879793 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:53.827485  879793 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:53.827505  879793 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:53.827674  879793 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:53.827954  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:53.827993  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:53.842395  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0520 12:43:53.842756  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:53.843207  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:53.843229  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:53.843578  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:53.843784  879793 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:43:53.843985  879793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:53.844009  879793 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:43:53.846821  879793 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:53.847278  879793 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:53.847308  879793 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:53.847476  879793 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:43:53.847675  879793 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:43:53.847828  879793 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:43:53.847969  879793 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:43:55.611196  879793 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:43:55.611290  879793 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:43:55.611304  879793 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:55.611313  879793 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:43:55.611331  879793 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:55.611339  879793 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:43:55.611664  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:55.611708  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:55.627784  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0520 12:43:55.628225  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:55.628694  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:55.628715  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:55.629096  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:55.629327  879793 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:43:55.630887  879793 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:43:55.630906  879793 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:43:55.631219  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:55.631257  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:55.646361  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0520 12:43:55.646718  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:55.647177  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:55.647204  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:55.647510  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:55.647692  879793 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:43:55.650469  879793 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:55.650894  879793 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:43:55.650930  879793 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:55.651063  879793 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:43:55.651359  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:55.651403  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:55.665769  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I0520 12:43:55.666249  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:55.666685  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:55.666710  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:55.667073  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:55.667294  879793 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:43:55.667523  879793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:55.667544  879793 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:43:55.669968  879793 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:55.670329  879793 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:43:55.670367  879793 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:43:55.670519  879793 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:43:55.670696  879793 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:43:55.670860  879793 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:43:55.670997  879793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:43:55.750776  879793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:55.766879  879793 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:55.766914  879793 api_server.go:166] Checking apiserver status ...
	I0520 12:43:55.766955  879793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:55.780993  879793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:43:55.792997  879793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:55.793052  879793 ssh_runner.go:195] Run: ls
	I0520 12:43:55.797603  879793 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:55.804254  879793 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:55.804275  879793 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:43:55.804283  879793 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:55.804298  879793 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:43:55.804596  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:55.804642  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:55.820022  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0520 12:43:55.820449  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:55.820918  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:55.820940  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:55.821275  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:55.821502  879793 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:43:55.822999  879793 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:43:55.823017  879793 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:43:55.823319  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:55.823360  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:55.839130  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0520 12:43:55.839602  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:55.840105  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:55.840126  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:55.840444  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:55.840641  879793 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:43:55.843591  879793 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:55.844006  879793 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:43:55.844037  879793 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:55.844204  879793 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:43:55.844532  879793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:55.844568  879793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:55.859785  879793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0520 12:43:55.860182  879793 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:55.860764  879793 main.go:141] libmachine: Using API Version  1
	I0520 12:43:55.860800  879793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:55.861119  879793 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:55.861351  879793 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:43:55.861529  879793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:55.861552  879793 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:43:55.864347  879793 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:55.864729  879793 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:43:55.864754  879793 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:43:55.864904  879793 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:43:55.865060  879793 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:43:55.865221  879793 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:43:55.865375  879793 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:43:55.950325  879793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:55.964211  879793 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
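Each of the exit status 3 results above traces back to the same condition: the status command cannot open an SSH session to ha-252263-m02, the dial to 192.168.39.22:22 fails with "connect: no route to host", and the node is therefore reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. A minimal, hypothetical Go sketch of that kind of TCP reachability probe (the address is copied from the log; the timeout and structure are illustrative assumptions, not the test suite's code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Illustrative endpoint taken from the log above; in the failing runs the
		// dial returns "connect: no route to host" and status marks the host as Error.
		addr := "192.168.39.22:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}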
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (4.536212049s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:43:57.828239  879894 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:43:57.828528  879894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:57.828538  879894 out.go:304] Setting ErrFile to fd 2...
	I0520 12:43:57.828544  879894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:43:57.828719  879894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:43:57.828901  879894 out.go:298] Setting JSON to false
	I0520 12:43:57.828935  879894 mustload.go:65] Loading cluster: ha-252263
	I0520 12:43:57.829042  879894 notify.go:220] Checking for updates...
	I0520 12:43:57.829356  879894 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:43:57.829376  879894 status.go:255] checking status of ha-252263 ...
	I0520 12:43:57.829752  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:57.829822  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:57.848488  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0520 12:43:57.848936  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:57.849594  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:43:57.849628  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:57.849989  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:57.850184  879894 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:43:57.851801  879894 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:43:57.851819  879894 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:57.852140  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:57.852184  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:57.866713  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0520 12:43:57.867211  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:57.867688  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:43:57.867713  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:57.867982  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:57.868185  879894 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:43:57.870654  879894 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:57.871075  879894 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:57.871110  879894 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:57.871238  879894 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:43:57.871523  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:57.871570  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:57.887618  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0520 12:43:57.887972  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:57.888517  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:43:57.888544  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:57.888861  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:57.889097  879894 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:43:57.889266  879894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:57.889293  879894 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:43:57.892100  879894 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:57.892538  879894 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:43:57.892568  879894 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:43:57.892706  879894 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:43:57.892875  879894 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:43:57.893037  879894 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:43:57.893183  879894 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:43:57.974270  879894 ssh_runner.go:195] Run: systemctl --version
	I0520 12:43:57.980534  879894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:43:57.995465  879894 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:43:57.995503  879894 api_server.go:166] Checking apiserver status ...
	I0520 12:43:57.995544  879894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:43:58.010963  879894 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:43:58.026816  879894 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:43:58.026890  879894 ssh_runner.go:195] Run: ls
	I0520 12:43:58.031382  879894 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:43:58.037334  879894 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:43:58.037354  879894 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:43:58.037364  879894 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:43:58.037381  879894 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:43:58.037679  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:58.037740  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:58.053027  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40191
	I0520 12:43:58.053458  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:58.053960  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:43:58.053986  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:58.054314  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:58.054498  879894 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:43:58.056285  879894 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:43:58.056307  879894 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:58.056601  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:58.056634  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:58.072198  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0520 12:43:58.072735  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:58.073300  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:43:58.073323  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:58.073642  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:58.073817  879894 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:43:58.076773  879894 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:58.077183  879894 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:58.077219  879894 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:58.077351  879894 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:43:58.077642  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:43:58.077675  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:43:58.093044  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42261
	I0520 12:43:58.093505  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:43:58.094032  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:43:58.094051  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:43:58.094320  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:43:58.094510  879894 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:43:58.094721  879894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:43:58.094751  879894 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:43:58.097472  879894 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:58.097958  879894 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:43:58.097994  879894 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:43:58.098104  879894 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:43:58.098249  879894 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:43:58.098412  879894 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:43:58.098557  879894 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:43:58.679116  879894 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:43:58.679167  879894 retry.go:31] will retry after 226.575387ms: dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:44:01.975122  879894 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:44:01.975239  879894 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:44:01.975263  879894 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:01.975271  879894 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:44:01.975290  879894 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:01.975297  879894 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:44:01.975636  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:01.975686  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:01.990923  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0520 12:44:01.991423  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:01.991907  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:44:01.991938  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:01.992266  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:01.992451  879894 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:01.993958  879894 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:44:01.993977  879894 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:01.994352  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:01.994400  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:02.008845  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42669
	I0520 12:44:02.009344  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:02.009886  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:44:02.009914  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:02.010306  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:02.010479  879894 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:44:02.013158  879894 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:02.013595  879894 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:02.013618  879894 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:02.013748  879894 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:02.014084  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:02.014135  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:02.031485  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0520 12:44:02.031934  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:02.032474  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:44:02.032493  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:02.032794  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:02.032991  879894 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:02.033163  879894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:02.033183  879894 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:02.035970  879894 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:02.036481  879894 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:02.036516  879894 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:02.036672  879894 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:02.036847  879894 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:02.036985  879894 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:02.037113  879894 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:02.116840  879894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:02.132537  879894 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:02.132566  879894 api_server.go:166] Checking apiserver status ...
	I0520 12:44:02.132603  879894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:02.146987  879894 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:44:02.156669  879894 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:02.156736  879894 ssh_runner.go:195] Run: ls
	I0520 12:44:02.161104  879894 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:02.165296  879894 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:02.165323  879894 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:44:02.165349  879894 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:02.165369  879894 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:44:02.165738  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:02.165786  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:02.180681  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I0520 12:44:02.181067  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:02.181629  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:44:02.181656  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:02.182018  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:02.182250  879894 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:02.183724  879894 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:44:02.183741  879894 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:02.184123  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:02.184169  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:02.200332  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0520 12:44:02.200716  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:02.201355  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:44:02.201374  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:02.201699  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:02.201920  879894 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:44:02.204968  879894 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:02.205369  879894 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:02.205403  879894 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:02.205527  879894 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:02.205823  879894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:02.205866  879894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:02.220487  879894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
	I0520 12:44:02.220836  879894 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:02.221226  879894 main.go:141] libmachine: Using API Version  1
	I0520 12:44:02.221243  879894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:02.221514  879894 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:02.221713  879894 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:02.221899  879894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:02.221918  879894 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:02.224503  879894 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:02.224886  879894 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:02.224905  879894 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:02.225056  879894 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:02.225236  879894 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:02.225422  879894 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:02.225543  879894 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:02.306714  879894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:02.321512  879894 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
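For the reachable control-plane nodes, the same logs show the success path: the kubeconfig server https://192.168.39.254:8443 is probed at /healthz, the request returns 200, and apiserver is reported as Running. A small sketch of such a health probe, under the assumption of a plain HTTPS GET with certificate verification skipped (the real check uses the cluster's credentials rather than skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Skipping certificate verification is an assumption for this sketch only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		// A 200 response corresponds to the "apiserver status = Running" lines in the log.
		fmt.Println("healthz returned", resp.StatusCode)
	}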
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (3.714335917s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:44:05.428224  879994 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:44:05.428324  879994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:05.428332  879994 out.go:304] Setting ErrFile to fd 2...
	I0520 12:44:05.428338  879994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:05.428550  879994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:44:05.428761  879994 out.go:298] Setting JSON to false
	I0520 12:44:05.428810  879994 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:05.428923  879994 notify.go:220] Checking for updates...
	I0520 12:44:05.429288  879994 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:05.429310  879994 status.go:255] checking status of ha-252263 ...
	I0520 12:44:05.429791  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:05.429851  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:05.448645  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I0520 12:44:05.449115  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:05.449765  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:05.449785  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:05.450199  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:05.450476  879994 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:44:05.452148  879994 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:44:05.452168  879994 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:05.452480  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:05.452558  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:05.467599  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0520 12:44:05.468057  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:05.468526  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:05.468547  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:05.468818  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:05.468996  879994 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:44:05.471546  879994 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:05.471985  879994 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:05.472011  879994 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:05.472131  879994 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:05.472465  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:05.472502  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:05.487997  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0520 12:44:05.488358  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:05.488731  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:05.488748  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:05.489116  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:05.489329  879994 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:44:05.489548  879994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:05.489580  879994 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:44:05.492484  879994 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:05.492983  879994 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:05.493004  879994 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:05.493175  879994 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:44:05.493335  879994 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:44:05.493477  879994 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:44:05.493603  879994 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:44:05.578627  879994 ssh_runner.go:195] Run: systemctl --version
	I0520 12:44:05.584831  879994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:05.599737  879994 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:05.599778  879994 api_server.go:166] Checking apiserver status ...
	I0520 12:44:05.599807  879994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:05.617386  879994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:44:05.626875  879994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:05.626924  879994 ssh_runner.go:195] Run: ls
	I0520 12:44:05.631846  879994 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:05.636273  879994 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:05.636294  879994 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:44:05.636307  879994 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:05.636332  879994 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:44:05.636618  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:05.636658  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:05.652102  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0520 12:44:05.652575  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:05.653133  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:05.653153  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:05.653521  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:05.653714  879994 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:44:05.655359  879994 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:44:05.655398  879994 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:44:05.655736  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:05.655799  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:05.670418  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0520 12:44:05.670815  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:05.671323  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:05.671351  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:05.671641  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:05.671862  879994 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:44:05.674649  879994 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:05.675139  879994 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:44:05.675173  879994 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:05.675292  879994 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:44:05.675589  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:05.675639  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:05.689649  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0520 12:44:05.689983  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:05.690474  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:05.690499  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:05.690821  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:05.691036  879994 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:44:05.691224  879994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:05.691251  879994 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:44:05.693842  879994 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:05.694267  879994 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:44:05.694287  879994 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:05.694425  879994 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:44:05.694595  879994 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:44:05.694740  879994 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:44:05.694887  879994 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:44:08.759079  879994 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:44:08.759189  879994 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:44:08.759204  879994 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:08.759212  879994 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:44:08.759244  879994 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:08.759251  879994 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:44:08.759591  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:08.759639  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:08.775828  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0520 12:44:08.776276  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:08.776773  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:08.776801  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:08.777169  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:08.777359  879994 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:08.778977  879994 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:44:08.779000  879994 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:08.779333  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:08.779375  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:08.794403  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0520 12:44:08.794762  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:08.795264  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:08.795293  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:08.795573  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:08.795894  879994 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:44:08.798815  879994 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:08.799289  879994 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:08.799325  879994 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:08.799438  879994 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:08.799775  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:08.799810  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:08.813866  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0520 12:44:08.814236  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:08.814733  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:08.814750  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:08.815091  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:08.815290  879994 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:08.815464  879994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:08.815486  879994 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:08.817949  879994 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:08.818398  879994 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:08.818431  879994 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:08.818524  879994 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:08.818687  879994 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:08.818821  879994 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:08.818959  879994 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:08.898753  879994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:08.914370  879994 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:08.914399  879994 api_server.go:166] Checking apiserver status ...
	I0520 12:44:08.914434  879994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:08.929165  879994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:44:08.939456  879994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:08.939503  879994 ssh_runner.go:195] Run: ls
	I0520 12:44:08.943842  879994 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:08.948111  879994 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:08.948130  879994 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:44:08.948139  879994 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:08.948154  879994 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:44:08.948429  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:08.948466  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:08.964068  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44751
	I0520 12:44:08.964544  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:08.965049  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:08.965073  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:08.965394  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:08.965556  879994 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:08.967143  879994 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:44:08.967158  879994 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:08.967419  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:08.967455  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:08.981801  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0520 12:44:08.982237  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:08.982668  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:08.982697  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:08.983041  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:08.983241  879994 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:44:08.985816  879994 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:08.986337  879994 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:08.986367  879994 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:08.986524  879994 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:08.986800  879994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:08.986838  879994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:09.001207  879994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0520 12:44:09.001550  879994 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:09.002047  879994 main.go:141] libmachine: Using API Version  1
	I0520 12:44:09.002072  879994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:09.002376  879994 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:09.002596  879994 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:09.002771  879994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:09.002789  879994 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:09.005440  879994 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:09.005820  879994 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:09.005851  879994 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:09.005950  879994 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:09.006145  879994 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:09.006310  879994 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:09.006420  879994 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:09.085994  879994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:09.099696  879994 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
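The Host:Error result for ha-252263-m02 in the run above follows directly from the SSH dial failure ("connect: no route to host" on 192.168.39.22:22); kubelet and apiserver are reported as Nonexistent only because the node could not be reached at all, not because those components were checked and found missing. Below is a minimal Go sketch of that reachability check, assuming only the IP and port taken from the log; it is illustrative and is not minikube's actual sshutil code.

// Minimal sketch (not minikube's code): reproduce the connectivity check that
// fails for ha-252263-m02 above. The address 192.168.39.22:22 comes from the
// log; everything else here is illustrative.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.22:22" // SSH endpoint the status command tries to reach
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A "connect: no route to host" here matches the sshutil dial failure
		// in the stderr block and explains the Host:Error / Kubelet:Nonexistent status.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("dial %s succeeded; SSH-based checks could proceed\n", addr)
}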
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (4.249467414s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:44:11.263907  880111 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:44:11.264158  880111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:11.264167  880111 out.go:304] Setting ErrFile to fd 2...
	I0520 12:44:11.264171  880111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:11.264379  880111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:44:11.264536  880111 out.go:298] Setting JSON to false
	I0520 12:44:11.264563  880111 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:11.264658  880111 notify.go:220] Checking for updates...
	I0520 12:44:11.264887  880111 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:11.264904  880111 status.go:255] checking status of ha-252263 ...
	I0520 12:44:11.265224  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:11.265273  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:11.285355  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0520 12:44:11.285765  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:11.286403  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:11.286426  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:11.286832  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:11.287073  880111 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:44:11.288880  880111 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:44:11.288902  880111 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:11.289186  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:11.289222  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:11.304736  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I0520 12:44:11.305184  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:11.305675  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:11.305698  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:11.305992  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:11.306184  880111 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:44:11.308755  880111 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:11.309153  880111 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:11.309189  880111 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:11.309260  880111 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:11.309598  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:11.309635  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:11.324958  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0520 12:44:11.325356  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:11.325773  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:11.325793  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:11.326113  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:11.326274  880111 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:44:11.326454  880111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:11.326476  880111 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:44:11.329116  880111 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:11.329505  880111 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:11.329534  880111 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:11.329651  880111 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:44:11.329840  880111 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:44:11.330002  880111 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:44:11.330119  880111 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:44:11.412264  880111 ssh_runner.go:195] Run: systemctl --version
	I0520 12:44:11.419703  880111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:11.436854  880111 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:11.436906  880111 api_server.go:166] Checking apiserver status ...
	I0520 12:44:11.436952  880111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:11.456623  880111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:44:11.471015  880111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:11.471077  880111 ssh_runner.go:195] Run: ls
	I0520 12:44:11.481462  880111 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:11.485978  880111 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:11.486004  880111 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:44:11.486019  880111 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:11.486046  880111 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:44:11.486427  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:11.486485  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:11.503681  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0520 12:44:11.504176  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:11.504671  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:11.504692  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:11.504993  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:11.505160  880111 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:44:11.506733  880111 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:44:11.506753  880111 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:44:11.507151  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:11.507199  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:11.522622  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0520 12:44:11.523077  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:11.523564  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:11.523586  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:11.523915  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:11.524124  880111 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:44:11.527291  880111 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:11.527754  880111 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:44:11.527791  880111 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:11.527932  880111 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:44:11.528268  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:11.528305  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:11.543448  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0520 12:44:11.543905  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:11.544416  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:11.544441  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:11.544723  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:11.544889  880111 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:44:11.545095  880111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:11.545116  880111 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:44:11.547585  880111 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:11.548003  880111 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:44:11.548030  880111 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:11.548168  880111 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:44:11.548327  880111 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:44:11.548475  880111 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:44:11.548627  880111 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:44:11.831101  880111 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:11.831171  880111 retry.go:31] will retry after 246.40413ms: dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:44:15.127128  880111 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:44:15.127271  880111 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:44:15.127294  880111 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:15.127306  880111 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:44:15.127333  880111 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:15.127346  880111 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:44:15.127682  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:15.127747  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:15.142823  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0520 12:44:15.143334  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:15.143832  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:15.143862  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:15.144182  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:15.144410  880111 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:15.145852  880111 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:44:15.145870  880111 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:15.146172  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:15.146205  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:15.162015  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0520 12:44:15.162393  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:15.162810  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:15.162829  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:15.163166  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:15.163370  880111 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:44:15.165941  880111 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:15.166380  880111 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:15.166406  880111 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:15.166552  880111 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:15.166968  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:15.167020  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:15.181594  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0520 12:44:15.181969  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:15.182365  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:15.182385  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:15.182725  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:15.182934  880111 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:15.183108  880111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:15.183127  880111 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:15.185645  880111 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:15.186059  880111 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:15.186088  880111 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:15.186231  880111 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:15.186393  880111 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:15.186559  880111 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:15.186737  880111 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:15.261979  880111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:15.276502  880111 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:15.276530  880111 api_server.go:166] Checking apiserver status ...
	I0520 12:44:15.276562  880111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:15.290562  880111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:44:15.302212  880111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:15.302279  880111 ssh_runner.go:195] Run: ls
	I0520 12:44:15.307819  880111 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:15.312143  880111 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:15.312162  880111 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:44:15.312171  880111 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:15.312199  880111 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:44:15.312486  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:15.312527  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:15.327399  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I0520 12:44:15.327798  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:15.328257  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:15.328278  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:15.328590  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:15.328801  880111 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:15.330202  880111 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:44:15.330217  880111 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:15.330492  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:15.330530  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:15.345202  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0520 12:44:15.345617  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:15.346051  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:15.346072  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:15.346403  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:15.346597  880111 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:44:15.349232  880111 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:15.349627  880111 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:15.349646  880111 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:15.349844  880111 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:15.350165  880111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:15.350212  880111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:15.365188  880111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0520 12:44:15.365583  880111 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:15.366090  880111 main.go:141] libmachine: Using API Version  1
	I0520 12:44:15.366114  880111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:15.366484  880111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:15.366705  880111 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:15.366921  880111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:15.366948  880111 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:15.369542  880111 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:15.369975  880111 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:15.370005  880111 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:15.370186  880111 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:15.370365  880111 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:15.370482  880111 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:15.370648  880111 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:15.454879  880111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:15.470534  880111 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
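For the nodes that are reachable, the log shows the status command probing https://192.168.39.254:8443/healthz and treating a 200 "ok" response as "apiserver status = Running". The sketch below approximates such a probe under stated assumptions: the endpoint is taken from the log, and skipping TLS verification is an illustrative shortcut, since the real check authenticates with the cluster's kubeconfig credentials.

// Minimal sketch (assumptions noted above): approximate the healthz probe the
// status command logs ("Checking apiserver healthz at https://192.168.39.254:8443/healthz").
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut only; not how minikube authenticates.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 200 with body "ok" corresponds to "apiserver status = Running" in the log.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}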
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (3.739038348s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:44:20.045433  880211 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:44:20.045708  880211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:20.045719  880211 out.go:304] Setting ErrFile to fd 2...
	I0520 12:44:20.045723  880211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:20.045906  880211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:44:20.046116  880211 out.go:298] Setting JSON to false
	I0520 12:44:20.046152  880211 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:20.046277  880211 notify.go:220] Checking for updates...
	I0520 12:44:20.046650  880211 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:20.046674  880211 status.go:255] checking status of ha-252263 ...
	I0520 12:44:20.047245  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:20.047327  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:20.065470  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0520 12:44:20.065929  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:20.066606  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:20.066648  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:20.067017  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:20.067235  880211 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:44:20.068736  880211 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:44:20.068761  880211 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:20.069020  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:20.069052  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:20.083779  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0520 12:44:20.084312  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:20.084790  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:20.084811  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:20.085139  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:20.085349  880211 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:44:20.088261  880211 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:20.088714  880211 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:20.088746  880211 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:20.088972  880211 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:20.089272  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:20.089316  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:20.104084  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35937
	I0520 12:44:20.104490  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:20.104994  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:20.105017  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:20.105334  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:20.105530  880211 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:44:20.105705  880211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:20.105733  880211 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:44:20.108066  880211 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:20.108470  880211 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:20.108499  880211 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:20.108656  880211 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:44:20.108819  880211 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:44:20.109098  880211 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:44:20.109263  880211 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:44:20.190738  880211 ssh_runner.go:195] Run: systemctl --version
	I0520 12:44:20.197297  880211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:20.213479  880211 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:20.213514  880211 api_server.go:166] Checking apiserver status ...
	I0520 12:44:20.213545  880211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:20.226626  880211 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:44:20.236400  880211 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:20.236454  880211 ssh_runner.go:195] Run: ls
	I0520 12:44:20.241093  880211 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:20.247795  880211 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:20.247817  880211 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:44:20.247831  880211 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:20.247854  880211 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:44:20.248166  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:20.248220  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:20.263207  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0520 12:44:20.263648  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:20.264202  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:20.264232  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:20.264548  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:20.264725  880211 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:44:20.266259  880211 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:44:20.266278  880211 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:44:20.266542  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:20.266575  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:20.283438  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0520 12:44:20.283793  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:20.284277  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:20.284298  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:20.284592  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:20.284823  880211 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:44:20.287540  880211 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:20.287990  880211 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:44:20.288020  880211 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:20.288174  880211 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:44:20.288497  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:20.288532  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:20.302920  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0520 12:44:20.303309  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:20.303777  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:20.303800  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:20.304112  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:20.304380  880211 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:44:20.304571  880211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:20.304595  880211 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:44:20.307256  880211 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:20.307694  880211 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:44:20.307739  880211 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:44:20.307893  880211 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:44:20.308054  880211 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:44:20.308201  880211 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:44:20.308309  880211 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	W0520 12:44:23.387179  880211 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0520 12:44:23.387304  880211 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0520 12:44:23.387327  880211 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:23.387335  880211 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:44:23.387355  880211 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0520 12:44:23.387362  880211 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:44:23.387678  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:23.387753  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:23.402764  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0520 12:44:23.403300  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:23.403787  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:23.403813  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:23.404210  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:23.404421  880211 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:23.406054  880211 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:44:23.406084  880211 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:23.406404  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:23.406446  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:23.422278  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0520 12:44:23.422673  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:23.423224  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:23.423246  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:23.423670  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:23.423909  880211 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:44:23.426869  880211 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:23.427309  880211 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:23.427337  880211 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:23.427470  880211 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:23.427897  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:23.427940  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:23.442483  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
	I0520 12:44:23.442860  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:23.443401  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:23.443427  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:23.444126  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:23.444318  880211 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:23.444498  880211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:23.444522  880211 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:23.447438  880211 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:23.447863  880211 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:23.447890  880211 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:23.448049  880211 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:23.448208  880211 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:23.448330  880211 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:23.448448  880211 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:23.528625  880211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:23.550306  880211 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:23.550341  880211 api_server.go:166] Checking apiserver status ...
	I0520 12:44:23.550391  880211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:23.565410  880211 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:44:23.575830  880211 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:23.575892  880211 ssh_runner.go:195] Run: ls
	I0520 12:44:23.580956  880211 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:23.585278  880211 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:23.585299  880211 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:44:23.585307  880211 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:23.585322  880211 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:44:23.585603  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:23.585636  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:23.600429  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0520 12:44:23.600853  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:23.601310  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:23.601334  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:23.601695  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:23.601898  880211 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:23.603467  880211 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:44:23.603489  880211 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:23.603820  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:23.603865  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:23.620227  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I0520 12:44:23.620776  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:23.621276  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:23.621301  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:23.621658  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:23.621838  880211 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:44:23.624860  880211 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:23.625372  880211 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:23.625400  880211 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:23.625502  880211 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:23.625787  880211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:23.625822  880211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:23.640440  880211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0520 12:44:23.640812  880211 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:23.641251  880211 main.go:141] libmachine: Using API Version  1
	I0520 12:44:23.641272  880211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:23.641584  880211 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:23.641765  880211 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:23.641950  880211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:23.641972  880211 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:23.644470  880211 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:23.644803  880211 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:23.644831  880211 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:23.644950  880211 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:23.645140  880211 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:23.645282  880211 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:23.645484  880211 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:23.726807  880211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:23.740915  880211 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 7 (611.381322ms)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:44:31.507779  880361 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:44:31.508044  880361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:31.508054  880361 out.go:304] Setting ErrFile to fd 2...
	I0520 12:44:31.508058  880361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:31.508241  880361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:44:31.508416  880361 out.go:298] Setting JSON to false
	I0520 12:44:31.508451  880361 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:31.508501  880361 notify.go:220] Checking for updates...
	I0520 12:44:31.508815  880361 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:31.508833  880361 status.go:255] checking status of ha-252263 ...
	I0520 12:44:31.509221  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.509311  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.524787  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0520 12:44:31.525247  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.525875  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.525905  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.526289  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.526505  880361 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:44:31.528258  880361 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:44:31.528280  880361 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:31.528679  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.528735  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.543123  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I0520 12:44:31.543439  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.543854  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.543877  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.544234  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.544431  880361 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:44:31.547288  880361 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:31.547783  880361 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:31.547818  880361 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:31.547941  880361 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:31.548231  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.548270  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.562974  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0520 12:44:31.563324  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.563746  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.563768  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.564077  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.564260  880361 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:44:31.564404  880361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:31.564423  880361 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:44:31.566976  880361 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:31.567522  880361 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:31.567555  880361 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:31.567726  880361 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:44:31.567888  880361 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:44:31.568035  880361 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:44:31.568180  880361 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:44:31.651700  880361 ssh_runner.go:195] Run: systemctl --version
	I0520 12:44:31.658037  880361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:31.676365  880361 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:31.676405  880361 api_server.go:166] Checking apiserver status ...
	I0520 12:44:31.676444  880361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:31.690497  880361 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:44:31.701544  880361 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:31.701598  880361 ssh_runner.go:195] Run: ls
	I0520 12:44:31.707519  880361 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:31.711897  880361 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:31.711921  880361 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:44:31.711932  880361 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:31.711948  880361 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:44:31.712238  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.712272  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.727537  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I0520 12:44:31.728020  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.728525  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.728552  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.728900  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.729126  880361 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:44:31.730674  880361 status.go:330] ha-252263-m02 host status = "Stopped" (err=<nil>)
	I0520 12:44:31.730690  880361 status.go:343] host is not running, skipping remaining checks
	I0520 12:44:31.730698  880361 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:31.730720  880361 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:44:31.731135  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.731176  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.745650  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0520 12:44:31.746096  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.746554  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.746574  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.746872  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.747076  880361 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:31.748533  880361 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:44:31.748553  880361 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:31.748919  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.748961  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.763082  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0520 12:44:31.763450  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.763858  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.763879  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.764171  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.764354  880361 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:44:31.766792  880361 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:31.767266  880361 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:31.767286  880361 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:31.767410  880361 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:31.767718  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.767757  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.781855  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0520 12:44:31.782225  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.782669  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.782701  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.783088  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.783284  880361 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:31.783488  880361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:31.783510  880361 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:31.786164  880361 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:31.786543  880361 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:31.786575  880361 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:31.786736  880361 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:31.786924  880361 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:31.787135  880361 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:31.787325  880361 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:31.870411  880361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:31.884586  880361 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:31.884616  880361 api_server.go:166] Checking apiserver status ...
	I0520 12:44:31.884655  880361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:31.898348  880361 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:44:31.907638  880361 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:31.907698  880361 ssh_runner.go:195] Run: ls
	I0520 12:44:31.912339  880361 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:31.916789  880361 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:31.916819  880361 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:44:31.916831  880361 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:31.916853  880361 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:44:31.917219  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.917260  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.932539  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0520 12:44:31.933012  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.933460  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.933478  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.933803  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.934013  880361 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:31.935610  880361 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:44:31.935631  880361 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:31.935918  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.935951  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.950629  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0520 12:44:31.951130  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.951632  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.951656  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.952023  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.952281  880361 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:44:31.955219  880361 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:31.955730  880361 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:31.955771  880361 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:31.955943  880361 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:31.956282  880361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:31.956324  880361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:31.971048  880361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I0520 12:44:31.971424  880361 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:31.971950  880361 main.go:141] libmachine: Using API Version  1
	I0520 12:44:31.971978  880361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:31.972361  880361 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:31.972556  880361 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:31.972725  880361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:31.972748  880361 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:31.975431  880361 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:31.975896  880361 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:31.975922  880361 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:31.976106  880361 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:31.976258  880361 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:31.976402  880361 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:31.976506  880361 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:32.058902  880361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:32.074336  880361 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 7 (612.834564ms)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-252263-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:44:45.721260  880465 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:44:45.721508  880465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:45.721517  880465 out.go:304] Setting ErrFile to fd 2...
	I0520 12:44:45.721521  880465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:45.721714  880465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:44:45.721886  880465 out.go:298] Setting JSON to false
	I0520 12:44:45.721913  880465 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:45.722032  880465 notify.go:220] Checking for updates...
	I0520 12:44:45.722271  880465 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:45.722287  880465 status.go:255] checking status of ha-252263 ...
	I0520 12:44:45.722685  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.722730  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.738239  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41535
	I0520 12:44:45.738761  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:45.739320  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:45.739349  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:45.739797  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:45.740068  880465 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:44:45.741719  880465 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:44:45.741751  880465 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:45.742058  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.742102  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.757165  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0520 12:44:45.757598  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:45.758031  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:45.758050  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:45.758337  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:45.758529  880465 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:44:45.761386  880465 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:45.761866  880465 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:45.761902  880465 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:45.761946  880465 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:44:45.762340  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.762379  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.778037  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I0520 12:44:45.778563  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:45.779180  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:45.779205  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:45.779555  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:45.779813  880465 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:44:45.780037  880465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:45.780065  880465 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:44:45.783147  880465 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:45.783578  880465 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:44:45.783606  880465 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:44:45.783792  880465 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:44:45.783971  880465 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:44:45.784132  880465 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:44:45.784321  880465 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:44:45.870397  880465 ssh_runner.go:195] Run: systemctl --version
	I0520 12:44:45.876785  880465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:45.892153  880465 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:45.892186  880465 api_server.go:166] Checking apiserver status ...
	I0520 12:44:45.892214  880465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:45.908953  880465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	W0520 12:44:45.919944  880465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:45.920003  880465 ssh_runner.go:195] Run: ls
	I0520 12:44:45.925120  880465 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:45.929197  880465 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:45.929216  880465 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:44:45.929226  880465 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:45.929242  880465 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:44:45.929515  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.929547  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.944149  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0520 12:44:45.944624  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:45.945157  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:45.945176  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:45.945565  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:45.945732  880465 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:44:45.947236  880465 status.go:330] ha-252263-m02 host status = "Stopped" (err=<nil>)
	I0520 12:44:45.947254  880465 status.go:343] host is not running, skipping remaining checks
	I0520 12:44:45.947262  880465 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:45.947284  880465 status.go:255] checking status of ha-252263-m03 ...
	I0520 12:44:45.947566  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.947599  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.961820  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I0520 12:44:45.962183  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:45.962594  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:45.962617  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:45.962901  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:45.963091  880465 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:45.964554  880465 status.go:330] ha-252263-m03 host status = "Running" (err=<nil>)
	I0520 12:44:45.964574  880465 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:45.964850  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.964884  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.979611  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45901
	I0520 12:44:45.980106  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:45.980692  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:45.980719  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:45.981093  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:45.981315  880465 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:44:45.984249  880465 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:45.984660  880465 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:45.984688  880465 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:45.984850  880465 host.go:66] Checking if "ha-252263-m03" exists ...
	I0520 12:44:45.985158  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:45.985192  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:45.999460  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I0520 12:44:45.999852  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:46.000306  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:46.000325  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:46.000592  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:46.000760  880465 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:46.000968  880465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:46.001006  880465 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:46.003724  880465 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:46.004266  880465 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:46.004290  880465 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:46.004452  880465 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:46.004625  880465 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:46.004783  880465 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:46.004923  880465 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:46.086923  880465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:46.101694  880465 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:44:46.101729  880465 api_server.go:166] Checking apiserver status ...
	I0520 12:44:46.101770  880465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:44:46.115617  880465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup
	W0520 12:44:46.125653  880465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1589/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:44:46.125731  880465 ssh_runner.go:195] Run: ls
	I0520 12:44:46.130630  880465 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:44:46.135059  880465 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:44:46.135079  880465 status.go:422] ha-252263-m03 apiserver status = Running (err=<nil>)
	I0520 12:44:46.135098  880465 status.go:257] ha-252263-m03 status: &{Name:ha-252263-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:44:46.135131  880465 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:44:46.135438  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:46.135479  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:46.150516  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0520 12:44:46.150922  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:46.151374  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:46.151396  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:46.151723  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:46.151928  880465 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:46.153391  880465 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:44:46.153410  880465 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:46.153772  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:46.153816  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:46.169549  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0520 12:44:46.169978  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:46.170397  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:46.170417  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:46.170729  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:46.170916  880465 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:44:46.173812  880465 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:46.174307  880465 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:46.174334  880465 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:46.174475  880465 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:44:46.174795  880465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:46.174835  880465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:46.189801  880465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0520 12:44:46.190201  880465 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:46.190626  880465 main.go:141] libmachine: Using API Version  1
	I0520 12:44:46.190644  880465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:46.190964  880465 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:46.191145  880465 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:46.191349  880465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:44:46.191372  880465 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:46.194000  880465 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:46.194374  880465 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:46.194409  880465 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:46.194539  880465 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:46.194690  880465 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:46.194800  880465 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:46.194952  880465 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:46.274124  880465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:44:46.288542  880465 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-252263 -n ha-252263
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-252263 logs -n 25: (1.352927306s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263:/home/docker/cp-test_ha-252263-m03_ha-252263.txt                      |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263 sudo cat                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263.txt                                |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m04 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp testdata/cp-test.txt                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263:/home/docker/cp-test_ha-252263-m04_ha-252263.txt                      |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263 sudo cat                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263.txt                                |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03:/home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m03 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-252263 node stop m02 -v=7                                                    | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-252263 node start m02 -v=7                                                   | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:36:55
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:36:55.522714  874942 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:36:55.522874  874942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:55.522887  874942 out.go:304] Setting ErrFile to fd 2...
	I0520 12:36:55.522894  874942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:55.523072  874942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:36:55.523607  874942 out.go:298] Setting JSON to false
	I0520 12:36:55.524517  874942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8363,"bootTime":1716200252,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:36:55.524575  874942 start.go:139] virtualization: kvm guest
	I0520 12:36:55.527010  874942 out.go:177] * [ha-252263] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:36:55.528911  874942 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 12:36:55.528891  874942 notify.go:220] Checking for updates...
	I0520 12:36:55.530376  874942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:36:55.532190  874942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:36:55.533798  874942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:55.535218  874942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:36:55.536593  874942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:36:55.537952  874942 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:36:55.572727  874942 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:36:55.574239  874942 start.go:297] selected driver: kvm2
	I0520 12:36:55.574259  874942 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:36:55.574285  874942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:36:55.574963  874942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:36:55.575027  874942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:36:55.590038  874942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:36:55.590091  874942 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:36:55.590281  874942 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:36:55.590307  874942 cni.go:84] Creating CNI manager for ""
	I0520 12:36:55.590313  874942 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 12:36:55.590318  874942 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 12:36:55.590361  874942 start.go:340] cluster config:
	{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:36:55.590466  874942 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:36:55.592333  874942 out.go:177] * Starting "ha-252263" primary control-plane node in "ha-252263" cluster
	I0520 12:36:55.593688  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:36:55.593726  874942 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:36:55.593737  874942 cache.go:56] Caching tarball of preloaded images
	I0520 12:36:55.593836  874942 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:36:55.593852  874942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:36:55.594156  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:36:55.594179  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json: {Name:mka44a3102880bc08a5134e6709927ed82a08e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:36:55.594300  874942 start.go:360] acquireMachinesLock for ha-252263: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:36:55.594327  874942 start.go:364] duration metric: took 14.32µs to acquireMachinesLock for "ha-252263"
	I0520 12:36:55.594340  874942 start.go:93] Provisioning new machine with config: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:36:55.594393  874942 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:36:55.596074  874942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:36:55.596211  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:36:55.596256  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:36:55.610363  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0520 12:36:55.610775  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:36:55.611351  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:36:55.611372  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:36:55.611698  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:36:55.611917  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:36:55.612091  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:36:55.612238  874942 start.go:159] libmachine.API.Create for "ha-252263" (driver="kvm2")
	I0520 12:36:55.612270  874942 client.go:168] LocalClient.Create starting
	I0520 12:36:55.612299  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 12:36:55.612334  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:36:55.612347  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:36:55.612399  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 12:36:55.612416  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:36:55.612428  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:36:55.612443  874942 main.go:141] libmachine: Running pre-create checks...
	I0520 12:36:55.612453  874942 main.go:141] libmachine: (ha-252263) Calling .PreCreateCheck
	I0520 12:36:55.612849  874942 main.go:141] libmachine: (ha-252263) Calling .GetConfigRaw
	I0520 12:36:55.613200  874942 main.go:141] libmachine: Creating machine...
	I0520 12:36:55.613212  874942 main.go:141] libmachine: (ha-252263) Calling .Create
	I0520 12:36:55.613356  874942 main.go:141] libmachine: (ha-252263) Creating KVM machine...
	I0520 12:36:55.614585  874942 main.go:141] libmachine: (ha-252263) DBG | found existing default KVM network
	I0520 12:36:55.615317  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:55.615186  874965 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0520 12:36:55.615333  874942 main.go:141] libmachine: (ha-252263) DBG | created network xml: 
	I0520 12:36:55.615342  874942 main.go:141] libmachine: (ha-252263) DBG | <network>
	I0520 12:36:55.615347  874942 main.go:141] libmachine: (ha-252263) DBG |   <name>mk-ha-252263</name>
	I0520 12:36:55.615353  874942 main.go:141] libmachine: (ha-252263) DBG |   <dns enable='no'/>
	I0520 12:36:55.615357  874942 main.go:141] libmachine: (ha-252263) DBG |   
	I0520 12:36:55.615363  874942 main.go:141] libmachine: (ha-252263) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 12:36:55.615379  874942 main.go:141] libmachine: (ha-252263) DBG |     <dhcp>
	I0520 12:36:55.615388  874942 main.go:141] libmachine: (ha-252263) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 12:36:55.615393  874942 main.go:141] libmachine: (ha-252263) DBG |     </dhcp>
	I0520 12:36:55.615417  874942 main.go:141] libmachine: (ha-252263) DBG |   </ip>
	I0520 12:36:55.615434  874942 main.go:141] libmachine: (ha-252263) DBG |   
	I0520 12:36:55.615445  874942 main.go:141] libmachine: (ha-252263) DBG | </network>
	I0520 12:36:55.615454  874942 main.go:141] libmachine: (ha-252263) DBG | 
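	For readability, the network definition that the DBG lines above print piece by piece assembles into the following libvirt network XML; every value (the mk-ha-252263 name, the 192.168.39.0/24 addressing, the DHCP range) is taken directly from the log and nothing has been added:
	
	<network>
	  <name>mk-ha-252263</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>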
	I0520 12:36:55.620329  874942 main.go:141] libmachine: (ha-252263) DBG | trying to create private KVM network mk-ha-252263 192.168.39.0/24...
	I0520 12:36:55.682543  874942 main.go:141] libmachine: (ha-252263) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263 ...
	I0520 12:36:55.682589  874942 main.go:141] libmachine: (ha-252263) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:36:55.682601  874942 main.go:141] libmachine: (ha-252263) DBG | private KVM network mk-ha-252263 192.168.39.0/24 created
	I0520 12:36:55.682619  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:55.682449  874965 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:55.682643  874942 main.go:141] libmachine: (ha-252263) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:36:55.943494  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:55.943374  874965 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa...
	I0520 12:36:56.155305  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:56.155140  874965 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/ha-252263.rawdisk...
	I0520 12:36:56.155334  874942 main.go:141] libmachine: (ha-252263) DBG | Writing magic tar header
	I0520 12:36:56.155360  874942 main.go:141] libmachine: (ha-252263) DBG | Writing SSH key tar header
	I0520 12:36:56.155372  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:56.155274  874965 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263 ...
	I0520 12:36:56.155395  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263
	I0520 12:36:56.155431  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263 (perms=drwx------)
	I0520 12:36:56.155447  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 12:36:56.155455  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:36:56.155465  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 12:36:56.155472  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 12:36:56.155479  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:36:56.155485  874942 main.go:141] libmachine: (ha-252263) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:36:56.155492  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:56.155498  874942 main.go:141] libmachine: (ha-252263) Creating domain...
	I0520 12:36:56.155511  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 12:36:56.155516  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:36:56.155534  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:36:56.155553  874942 main.go:141] libmachine: (ha-252263) DBG | Checking permissions on dir: /home
	I0520 12:36:56.155565  874942 main.go:141] libmachine: (ha-252263) DBG | Skipping /home - not owner
	I0520 12:36:56.156717  874942 main.go:141] libmachine: (ha-252263) define libvirt domain using xml: 
	I0520 12:36:56.156728  874942 main.go:141] libmachine: (ha-252263) <domain type='kvm'>
	I0520 12:36:56.156740  874942 main.go:141] libmachine: (ha-252263)   <name>ha-252263</name>
	I0520 12:36:56.156745  874942 main.go:141] libmachine: (ha-252263)   <memory unit='MiB'>2200</memory>
	I0520 12:36:56.156751  874942 main.go:141] libmachine: (ha-252263)   <vcpu>2</vcpu>
	I0520 12:36:56.156755  874942 main.go:141] libmachine: (ha-252263)   <features>
	I0520 12:36:56.156760  874942 main.go:141] libmachine: (ha-252263)     <acpi/>
	I0520 12:36:56.156765  874942 main.go:141] libmachine: (ha-252263)     <apic/>
	I0520 12:36:56.156775  874942 main.go:141] libmachine: (ha-252263)     <pae/>
	I0520 12:36:56.156796  874942 main.go:141] libmachine: (ha-252263)     
	I0520 12:36:56.156808  874942 main.go:141] libmachine: (ha-252263)   </features>
	I0520 12:36:56.156825  874942 main.go:141] libmachine: (ha-252263)   <cpu mode='host-passthrough'>
	I0520 12:36:56.156835  874942 main.go:141] libmachine: (ha-252263)   
	I0520 12:36:56.156839  874942 main.go:141] libmachine: (ha-252263)   </cpu>
	I0520 12:36:56.156844  874942 main.go:141] libmachine: (ha-252263)   <os>
	I0520 12:36:56.156851  874942 main.go:141] libmachine: (ha-252263)     <type>hvm</type>
	I0520 12:36:56.156857  874942 main.go:141] libmachine: (ha-252263)     <boot dev='cdrom'/>
	I0520 12:36:56.156863  874942 main.go:141] libmachine: (ha-252263)     <boot dev='hd'/>
	I0520 12:36:56.156869  874942 main.go:141] libmachine: (ha-252263)     <bootmenu enable='no'/>
	I0520 12:36:56.156875  874942 main.go:141] libmachine: (ha-252263)   </os>
	I0520 12:36:56.156880  874942 main.go:141] libmachine: (ha-252263)   <devices>
	I0520 12:36:56.156887  874942 main.go:141] libmachine: (ha-252263)     <disk type='file' device='cdrom'>
	I0520 12:36:56.156894  874942 main.go:141] libmachine: (ha-252263)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/boot2docker.iso'/>
	I0520 12:36:56.156904  874942 main.go:141] libmachine: (ha-252263)       <target dev='hdc' bus='scsi'/>
	I0520 12:36:56.156934  874942 main.go:141] libmachine: (ha-252263)       <readonly/>
	I0520 12:36:56.156960  874942 main.go:141] libmachine: (ha-252263)     </disk>
	I0520 12:36:56.156995  874942 main.go:141] libmachine: (ha-252263)     <disk type='file' device='disk'>
	I0520 12:36:56.157020  874942 main.go:141] libmachine: (ha-252263)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:36:56.157048  874942 main.go:141] libmachine: (ha-252263)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/ha-252263.rawdisk'/>
	I0520 12:36:56.157059  874942 main.go:141] libmachine: (ha-252263)       <target dev='hda' bus='virtio'/>
	I0520 12:36:56.157070  874942 main.go:141] libmachine: (ha-252263)     </disk>
	I0520 12:36:56.157081  874942 main.go:141] libmachine: (ha-252263)     <interface type='network'>
	I0520 12:36:56.157096  874942 main.go:141] libmachine: (ha-252263)       <source network='mk-ha-252263'/>
	I0520 12:36:56.157113  874942 main.go:141] libmachine: (ha-252263)       <model type='virtio'/>
	I0520 12:36:56.157121  874942 main.go:141] libmachine: (ha-252263)     </interface>
	I0520 12:36:56.157125  874942 main.go:141] libmachine: (ha-252263)     <interface type='network'>
	I0520 12:36:56.157133  874942 main.go:141] libmachine: (ha-252263)       <source network='default'/>
	I0520 12:36:56.157137  874942 main.go:141] libmachine: (ha-252263)       <model type='virtio'/>
	I0520 12:36:56.157144  874942 main.go:141] libmachine: (ha-252263)     </interface>
	I0520 12:36:56.157148  874942 main.go:141] libmachine: (ha-252263)     <serial type='pty'>
	I0520 12:36:56.157156  874942 main.go:141] libmachine: (ha-252263)       <target port='0'/>
	I0520 12:36:56.157160  874942 main.go:141] libmachine: (ha-252263)     </serial>
	I0520 12:36:56.157168  874942 main.go:141] libmachine: (ha-252263)     <console type='pty'>
	I0520 12:36:56.157172  874942 main.go:141] libmachine: (ha-252263)       <target type='serial' port='0'/>
	I0520 12:36:56.157181  874942 main.go:141] libmachine: (ha-252263)     </console>
	I0520 12:36:56.157194  874942 main.go:141] libmachine: (ha-252263)     <rng model='virtio'>
	I0520 12:36:56.157212  874942 main.go:141] libmachine: (ha-252263)       <backend model='random'>/dev/random</backend>
	I0520 12:36:56.157224  874942 main.go:141] libmachine: (ha-252263)     </rng>
	I0520 12:36:56.157231  874942 main.go:141] libmachine: (ha-252263)     
	I0520 12:36:56.157241  874942 main.go:141] libmachine: (ha-252263)     
	I0520 12:36:56.157250  874942 main.go:141] libmachine: (ha-252263)   </devices>
	I0520 12:36:56.157260  874942 main.go:141] libmachine: (ha-252263) </domain>
	I0520 12:36:56.157272  874942 main.go:141] libmachine: (ha-252263) 
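	Assembled the same way, the domain definition logged line by line above corresponds to this libvirt domain XML (reconstructed from the libmachine output above; empty placeholder lines in the log are omitted):
	
	<domain type='kvm'>
	  <name>ha-252263</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/ha-252263.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-ha-252263'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>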
	I0520 12:36:56.161707  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:61:1a:1b in network default
	I0520 12:36:56.162323  874942 main.go:141] libmachine: (ha-252263) Ensuring networks are active...
	I0520 12:36:56.162338  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:56.163052  874942 main.go:141] libmachine: (ha-252263) Ensuring network default is active
	I0520 12:36:56.163402  874942 main.go:141] libmachine: (ha-252263) Ensuring network mk-ha-252263 is active
	I0520 12:36:56.163905  874942 main.go:141] libmachine: (ha-252263) Getting domain xml...
	I0520 12:36:56.164647  874942 main.go:141] libmachine: (ha-252263) Creating domain...
	I0520 12:36:57.336606  874942 main.go:141] libmachine: (ha-252263) Waiting to get IP...
	I0520 12:36:57.337492  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:57.337901  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:57.337948  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:57.337892  874965 retry.go:31] will retry after 268.398176ms: waiting for machine to come up
	I0520 12:36:57.608480  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:57.609017  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:57.609047  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:57.608953  874965 retry.go:31] will retry after 265.174618ms: waiting for machine to come up
	I0520 12:36:57.875542  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:57.876034  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:57.876070  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:57.875979  874965 retry.go:31] will retry after 479.627543ms: waiting for machine to come up
	I0520 12:36:58.357692  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:58.358108  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:58.358134  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:58.358071  874965 retry.go:31] will retry after 541.356153ms: waiting for machine to come up
	I0520 12:36:58.900870  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:58.901308  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:58.901338  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:58.901253  874965 retry.go:31] will retry after 533.411181ms: waiting for machine to come up
	I0520 12:36:59.436114  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:36:59.436492  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:36:59.436517  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:36:59.436445  874965 retry.go:31] will retry after 937.293304ms: waiting for machine to come up
	I0520 12:37:00.375519  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:00.375916  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:00.375948  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:00.375881  874965 retry.go:31] will retry after 1.113015434s: waiting for machine to come up
	I0520 12:37:01.490751  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:01.491160  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:01.491188  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:01.491106  874965 retry.go:31] will retry after 1.487308712s: waiting for machine to come up
	I0520 12:37:02.979983  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:02.980469  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:02.980503  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:02.980415  874965 retry.go:31] will retry after 1.285882127s: waiting for machine to come up
	I0520 12:37:04.267910  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:04.268417  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:04.268451  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:04.268344  874965 retry.go:31] will retry after 1.917962446s: waiting for machine to come up
	I0520 12:37:06.188323  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:06.188815  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:06.188859  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:06.188788  874965 retry.go:31] will retry after 1.809201113s: waiting for machine to come up
	I0520 12:37:07.999321  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:07.999724  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:07.999766  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:07.999684  874965 retry.go:31] will retry after 3.16325035s: waiting for machine to come up
	I0520 12:37:11.164245  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:11.164616  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:11.164638  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:11.164583  874965 retry.go:31] will retry after 3.344329876s: waiting for machine to come up
	I0520 12:37:14.512959  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:14.513408  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find current IP address of domain ha-252263 in network mk-ha-252263
	I0520 12:37:14.513433  874942 main.go:141] libmachine: (ha-252263) DBG | I0520 12:37:14.513355  874965 retry.go:31] will retry after 5.078434537s: waiting for machine to come up
	I0520 12:37:19.596279  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.596681  874942 main.go:141] libmachine: (ha-252263) Found IP for machine: 192.168.39.182
	I0520 12:37:19.596698  874942 main.go:141] libmachine: (ha-252263) Reserving static IP address...
	I0520 12:37:19.596707  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has current primary IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.597114  874942 main.go:141] libmachine: (ha-252263) DBG | unable to find host DHCP lease matching {name: "ha-252263", mac: "52:54:00:44:6e:b0", ip: "192.168.39.182"} in network mk-ha-252263
	I0520 12:37:19.667372  874942 main.go:141] libmachine: (ha-252263) Reserved static IP address: 192.168.39.182
	I0520 12:37:19.667400  874942 main.go:141] libmachine: (ha-252263) Waiting for SSH to be available...
	I0520 12:37:19.667411  874942 main.go:141] libmachine: (ha-252263) DBG | Getting to WaitForSSH function...
	I0520 12:37:19.669900  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.670286  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:19.670311  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.670463  874942 main.go:141] libmachine: (ha-252263) DBG | Using SSH client type: external
	I0520 12:37:19.670481  874942 main.go:141] libmachine: (ha-252263) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa (-rw-------)
	I0520 12:37:19.670525  874942 main.go:141] libmachine: (ha-252263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:37:19.670535  874942 main.go:141] libmachine: (ha-252263) DBG | About to run SSH command:
	I0520 12:37:19.670543  874942 main.go:141] libmachine: (ha-252263) DBG | exit 0
	I0520 12:37:19.794871  874942 main.go:141] libmachine: (ha-252263) DBG | SSH cmd err, output: <nil>: 
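	Collapsed onto one line, the external SSH probe described above amounts to roughly the following invocation; every flag, the docker@192.168.39.182 target, the id_rsa path, and the "exit 0" command come straight from the log entry, only the layout differs:
	
	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa -p 22 "exit 0"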
	I0520 12:37:19.795202  874942 main.go:141] libmachine: (ha-252263) KVM machine creation complete!
	I0520 12:37:19.795457  874942 main.go:141] libmachine: (ha-252263) Calling .GetConfigRaw
	I0520 12:37:19.796005  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:19.796227  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:19.796378  874942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:37:19.796396  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:19.797861  874942 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:37:19.797888  874942 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:37:19.797895  874942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:37:19.797900  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:19.799825  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.800157  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:19.800183  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.800322  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:19.800500  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.800659  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.800812  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:19.800974  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:19.801242  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:19.801257  874942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:37:19.910063  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:37:19.910088  874942 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:37:19.910095  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:19.912584  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.912962  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:19.912993  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:19.913131  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:19.913312  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.913491  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:19.913630  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:19.913787  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:19.913960  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:19.913972  874942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:37:20.019419  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:37:20.019485  874942 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:37:20.019495  874942 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:37:20.019504  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:37:20.019781  874942 buildroot.go:166] provisioning hostname "ha-252263"
	I0520 12:37:20.019806  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:37:20.019997  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.022669  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.023018  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.023039  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.023229  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.023399  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.023533  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.023638  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.023804  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:20.024021  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:20.024038  874942 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263 && echo "ha-252263" | sudo tee /etc/hostname
	I0520 12:37:20.144540  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263
	
	I0520 12:37:20.144573  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.147155  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.147543  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.147582  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.147775  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.147976  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.148139  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.148239  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.148364  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:20.148562  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:20.148579  874942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:37:20.263591  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:37:20.263620  874942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:37:20.263663  874942 buildroot.go:174] setting up certificates
	I0520 12:37:20.263675  874942 provision.go:84] configureAuth start
	I0520 12:37:20.263688  874942 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:37:20.264004  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:20.266512  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.266893  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.266924  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.267035  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.269193  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.269516  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.269542  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.269647  874942 provision.go:143] copyHostCerts
	I0520 12:37:20.269679  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:37:20.269709  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:37:20.269719  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:37:20.269782  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:37:20.269887  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:37:20.269908  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:37:20.269916  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:37:20.269942  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:37:20.269996  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:37:20.270013  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:37:20.270020  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:37:20.270040  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:37:20.270105  874942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263 san=[127.0.0.1 192.168.39.182 ha-252263 localhost minikube]
	I0520 12:37:20.653179  874942 provision.go:177] copyRemoteCerts
	I0520 12:37:20.653240  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:37:20.653271  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.655925  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.656232  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.656265  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.656399  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.656583  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.656742  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.656915  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:20.741094  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:37:20.741182  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:37:20.765713  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:37:20.765806  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 12:37:20.789218  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:37:20.789295  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:37:20.812328  874942 provision.go:87] duration metric: took 548.635907ms to configureAuth
	I0520 12:37:20.812359  874942 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:37:20.812547  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:37:20.812628  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:20.815236  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.815567  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:20.815605  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:20.815802  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:20.816015  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.816188  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:20.816317  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:20.816496  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:20.816673  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:20.816689  874942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:37:21.075709  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:37:21.075746  874942 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:37:21.075759  874942 main.go:141] libmachine: (ha-252263) Calling .GetURL
	I0520 12:37:21.076990  874942 main.go:141] libmachine: (ha-252263) DBG | Using libvirt version 6000000
	I0520 12:37:21.079432  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.079759  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.079781  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.079969  874942 main.go:141] libmachine: Docker is up and running!
	I0520 12:37:21.079985  874942 main.go:141] libmachine: Reticulating splines...
	I0520 12:37:21.079994  874942 client.go:171] duration metric: took 25.467715983s to LocalClient.Create
	I0520 12:37:21.080021  874942 start.go:167] duration metric: took 25.467784578s to libmachine.API.Create "ha-252263"
	I0520 12:37:21.080032  874942 start.go:293] postStartSetup for "ha-252263" (driver="kvm2")
	I0520 12:37:21.080046  874942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:37:21.080070  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.080296  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:37:21.080320  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.082882  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.083291  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.083323  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.083402  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.083580  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.083765  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.083895  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:21.164840  874942 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:37:21.169194  874942 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:37:21.169222  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:37:21.169303  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:37:21.169404  874942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:37:21.169417  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:37:21.169516  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:37:21.178738  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:37:21.201779  874942 start.go:296] duration metric: took 121.733252ms for postStartSetup
	I0520 12:37:21.201849  874942 main.go:141] libmachine: (ha-252263) Calling .GetConfigRaw
	I0520 12:37:21.202463  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:21.205067  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.205409  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.205429  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.205735  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:37:21.205924  874942 start.go:128] duration metric: took 25.611519662s to createHost
	I0520 12:37:21.205950  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.208406  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.208797  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.208821  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.208984  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.209123  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.209251  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.209410  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.209551  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:37:21.209699  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:37:21.209712  874942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 12:37:21.315330  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208641.294732996
	
	I0520 12:37:21.315362  874942 fix.go:216] guest clock: 1716208641.294732996
	I0520 12:37:21.315369  874942 fix.go:229] Guest: 2024-05-20 12:37:21.294732996 +0000 UTC Remote: 2024-05-20 12:37:21.205935394 +0000 UTC m=+25.717718406 (delta=88.797602ms)
	I0520 12:37:21.315421  874942 fix.go:200] guest clock delta is within tolerance: 88.797602ms
	I0520 12:37:21.315430  874942 start.go:83] releasing machines lock for "ha-252263", held for 25.721096085s
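The guest-clock check above issues a single timestamp command over SSH and compares the result with the host's clock; the start is accepted only because the delta (88.797602ms here) stays within tolerance. A minimal sketch of that probe, with illustrative variable names:

	guest_epoch=$(date +%s.%N)   # what the SSH command above returns from inside the VM
	host_epoch=$(date +%s.%N)    # taken on the host at roughly the same moment
	# minikube computes host_epoch - guest_epoch and proceeds when it is within tolerance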
	I0520 12:37:21.315459  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.315708  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:21.318184  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.318471  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.318495  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.318625  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.319172  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.319378  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:21.319453  874942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:37:21.319512  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.319566  874942 ssh_runner.go:195] Run: cat /version.json
	I0520 12:37:21.319587  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:21.322135  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322360  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322452  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.322469  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322641  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.322779  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:21.322804  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:21.322813  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.322953  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:21.323025  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.323262  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:21.323304  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:21.323432  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:21.323551  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	W0520 12:37:21.399625  874942 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:37:21.399753  874942 ssh_runner.go:195] Run: systemctl --version
	I0520 12:37:21.422497  874942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:37:21.580652  874942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:37:21.587373  874942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:37:21.587432  874942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:37:21.603467  874942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:37:21.603492  874942 start.go:494] detecting cgroup driver to use...
	I0520 12:37:21.603586  874942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:37:21.620241  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:37:21.633588  874942 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:37:21.633633  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:37:21.646258  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:37:21.658897  874942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:37:21.773627  874942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:37:21.918419  874942 docker.go:233] disabling docker service ...
	I0520 12:37:21.918505  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:37:21.932965  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:37:21.945987  874942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:37:22.094195  874942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:37:22.214741  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:37:22.228699  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:37:22.246794  874942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:37:22.246880  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.256699  874942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:37:22.256760  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.266482  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.276338  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.286282  874942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:37:22.297032  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.306862  874942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.323683  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:37:22.333323  874942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:37:22.342159  874942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:37:22.342213  874942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:37:22.354542  874942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:37:22.364062  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:37:22.475186  874942 ssh_runner.go:195] Run: sudo systemctl restart crio
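The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A hypothetical manual spot-check after the restart, reusing only commands already shown in this log plus a plain grep:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo /usr/bin/crictl version   # confirms the runtime answers on /var/run/crio/crio.sock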
	I0520 12:37:22.607916  874942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:37:22.607997  874942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:37:22.612724  874942 start.go:562] Will wait 60s for crictl version
	I0520 12:37:22.612888  874942 ssh_runner.go:195] Run: which crictl
	I0520 12:37:22.616685  874942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:37:22.655818  874942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:37:22.655892  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:37:22.682470  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:37:22.711824  874942 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:37:22.712837  874942 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:37:22.715583  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:22.715925  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:22.715954  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:22.716133  874942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:37:22.720175  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
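The /etc/hosts edit above uses a grep-then-append idiom so the entry is replaced rather than duplicated. The same pattern, with the values from this run factored into shell variables for clarity:

	ADDR=192.168.39.1; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; echo "$ADDR"$'\t'"$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts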
	I0520 12:37:22.733326  874942 kubeadm.go:877] updating cluster {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:37:22.733449  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:37:22.733508  874942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:37:22.765473  874942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 12:37:22.765541  874942 ssh_runner.go:195] Run: which lz4
	I0520 12:37:22.769431  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 12:37:22.769515  874942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 12:37:22.773682  874942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 12:37:22.773713  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 12:37:24.141913  874942 crio.go:462] duration metric: took 1.372417849s to copy over tarball
	I0520 12:37:24.141993  874942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 12:37:26.194872  874942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.052823547s)
	I0520 12:37:26.194904  874942 crio.go:469] duration metric: took 2.052964592s to extract the tarball
	I0520 12:37:26.194914  874942 ssh_runner.go:146] rm: /preloaded.tar.lz4
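Because no preloaded images were found, the test copies the cached lz4 tarball into the VM and unpacks it under /var before re-checking the image store. A hypothetical manual equivalent of that step, using the same commands shown above:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json   # should now list the preloaded k8s images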
	I0520 12:37:26.238270  874942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:37:26.294122  874942 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:37:26.294148  874942 cache_images.go:84] Images are preloaded, skipping loading
	I0520 12:37:26.294157  874942 kubeadm.go:928] updating node { 192.168.39.182 8443 v1.30.1 crio true true} ...
	I0520 12:37:26.294285  874942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:37:26.294378  874942 ssh_runner.go:195] Run: crio config
	I0520 12:37:26.338350  874942 cni.go:84] Creating CNI manager for ""
	I0520 12:37:26.338372  874942 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 12:37:26.338389  874942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:37:26.338416  874942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-252263 NodeName:ha-252263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:37:26.338561  874942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-252263"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
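The kubeadm.yaml rendered above is written to /var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml, and consumed by the kubeadm init invocation that appears further down in this log. In outline, with paths and version taken from this run:

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	# (the actual run adds the --ignore-preflight-errors list shown in the kubeadm init line below)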
	I0520 12:37:26.338584  874942 kube-vip.go:115] generating kube-vip config ...
	I0520 12:37:26.338627  874942 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:37:26.355710  874942 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:37:26.355831  874942 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
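The kube-vip manifest above is dropped into /etc/kubernetes/manifests as a static pod (the scp to kube-vip.yaml appears a few lines below), so the kubelet starts it alongside the control-plane pods and it then advertises the HA VIP 192.168.39.254 on port 8443. A quick hypothetical check once the node is up:

	sudo ls /etc/kubernetes/manifests            # should include kube-vip.yaml
	ip addr show eth0 | grep 192.168.39.254      # the VIP bound by kube-vip on the leader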
	I0520 12:37:26.355884  874942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:37:26.365744  874942 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:37:26.365796  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 12:37:26.374782  874942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 12:37:26.390579  874942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:37:26.406515  874942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 12:37:26.422366  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 12:37:26.438142  874942 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:37:26.441986  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:37:26.453891  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:37:26.576120  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:37:26.592196  874942 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.182
	I0520 12:37:26.592225  874942 certs.go:194] generating shared ca certs ...
	I0520 12:37:26.592248  874942 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:26.592433  874942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:37:26.592492  874942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:37:26.592506  874942 certs.go:256] generating profile certs ...
	I0520 12:37:26.592570  874942 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:37:26.592591  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt with IP's: []
	I0520 12:37:26.812850  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt ...
	I0520 12:37:26.812881  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt: {Name:mk923141f1efb3fc32fe7a6617fae7374249c3d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:26.813071  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key ...
	I0520 12:37:26.813086  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key: {Name:mkb137e09f84f93aec1540f80bb1a50c72c56e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:26.813193  874942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629
	I0520 12:37:26.813209  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.254]
	I0520 12:37:27.078051  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629 ...
	I0520 12:37:27.078085  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629: {Name:mkf853b6980b0a5db71ada545009422aa97c9cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.078262  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629 ...
	I0520 12:37:27.078280  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629: {Name:mk8e8df3bf7473c3e59d67197fa4da96247d6a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.078372  874942 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.905fa629 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:37:27.078448  874942 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.905fa629 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
	I0520 12:37:27.078499  874942 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:37:27.078514  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt with IP's: []
	I0520 12:37:27.298184  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt ...
	I0520 12:37:27.298213  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt: {Name:mk2dfcf554fe922a6ee5776cd9fb5b4a108a69cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.298395  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key ...
	I0520 12:37:27.298409  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key: {Name:mkc3425ae95d7b09a44694b623a43120e707d763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:27.298502  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:37:27.298521  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:37:27.298533  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:37:27.298545  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:37:27.298557  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:37:27.298570  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:37:27.298581  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:37:27.298593  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:37:27.298641  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:37:27.298684  874942 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:37:27.298694  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:37:27.298718  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:37:27.298740  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:37:27.298761  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:37:27.298801  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:37:27.298829  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.298862  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.298881  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.299432  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:37:27.326594  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:37:27.352290  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:37:27.381601  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:37:27.406856  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 12:37:27.433992  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 12:37:27.456827  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:37:27.479597  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:37:27.502409  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:37:27.524924  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:37:27.547140  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:37:27.569852  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 12:37:27.586389  874942 ssh_runner.go:195] Run: openssl version
	I0520 12:37:27.594612  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:37:27.605833  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.610706  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.610759  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:37:27.617003  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:37:27.629753  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:37:27.640985  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.645841  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.645897  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:37:27.651895  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:37:27.662740  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:37:27.673662  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.678449  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.678492  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:37:27.684369  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
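Each ca-certificates entry above is installed by hashing the PEM and linking /etc/ssl/certs/<hash>.0 to it, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 names come from. The idiom, spelled out with the commands already used in this log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# for minikubeCA.pem the hash is b5213941, matching the symlink created above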
	I0520 12:37:27.695699  874942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:37:27.700059  874942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:37:27.700108  874942 kubeadm.go:391] StartCluster: {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:37:27.700194  874942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:37:27.700245  874942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:37:27.740509  874942 cri.go:89] found id: ""
	I0520 12:37:27.740597  874942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 12:37:27.750943  874942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 12:37:27.760648  874942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 12:37:27.770535  874942 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 12:37:27.770574  874942 kubeadm.go:156] found existing configuration files:
	
	I0520 12:37:27.770618  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 12:37:27.779837  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 12:37:27.779914  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 12:37:27.789350  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 12:37:27.798196  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 12:37:27.798250  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 12:37:27.807379  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 12:37:27.816387  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 12:37:27.816433  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 12:37:27.825766  874942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 12:37:27.835240  874942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 12:37:27.835293  874942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 12:37:27.844678  874942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 12:37:27.965545  874942 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 12:37:27.965651  874942 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 12:37:28.082992  874942 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 12:37:28.083131  874942 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 12:37:28.083233  874942 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 12:37:28.296410  874942 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 12:37:28.558599  874942 out.go:204]   - Generating certificates and keys ...
	I0520 12:37:28.558745  874942 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 12:37:28.558822  874942 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 12:37:28.558982  874942 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 12:37:28.606816  874942 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 12:37:28.675861  874942 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 12:37:28.922702  874942 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 12:37:29.011333  874942 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 12:37:29.011483  874942 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-252263 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0520 12:37:29.206710  874942 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 12:37:29.207038  874942 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-252263 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0520 12:37:29.263571  874942 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 12:37:29.504741  874942 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 12:37:29.548497  874942 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 12:37:29.548782  874942 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 12:37:29.973346  874942 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 12:37:30.377729  874942 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 12:37:30.444622  874942 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 12:37:30.545797  874942 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 12:37:30.604806  874942 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 12:37:30.604912  874942 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 12:37:30.605011  874942 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 12:37:30.606359  874942 out.go:204]   - Booting up control plane ...
	I0520 12:37:30.606459  874942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 12:37:30.606545  874942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 12:37:30.606627  874942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 12:37:30.627315  874942 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 12:37:30.628993  874942 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 12:37:30.629064  874942 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 12:37:30.771523  874942 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 12:37:30.771651  874942 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 12:37:31.272144  874942 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.019925ms
	I0520 12:37:31.272254  874942 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 12:37:37.211539  874942 kubeadm.go:309] [api-check] The API server is healthy after 5.942108324s
	I0520 12:37:37.230286  874942 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 12:37:37.241865  874942 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 12:37:37.269611  874942 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 12:37:37.269795  874942 kubeadm.go:309] [mark-control-plane] Marking the node ha-252263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 12:37:37.282536  874942 kubeadm.go:309] [bootstrap-token] Using token: p522o0.g86oczkum8u4xbvc
	I0520 12:37:37.283911  874942 out.go:204]   - Configuring RBAC rules ...
	I0520 12:37:37.284015  874942 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 12:37:37.309231  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 12:37:37.316659  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 12:37:37.319425  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 12:37:37.322967  874942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 12:37:37.325994  874942 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 12:37:37.620791  874942 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 12:37:38.075905  874942 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 12:37:38.623500  874942 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 12:37:38.624371  874942 kubeadm.go:309] 
	I0520 12:37:38.624439  874942 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 12:37:38.624450  874942 kubeadm.go:309] 
	I0520 12:37:38.624522  874942 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 12:37:38.624531  874942 kubeadm.go:309] 
	I0520 12:37:38.624573  874942 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 12:37:38.624628  874942 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 12:37:38.624724  874942 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 12:37:38.624750  874942 kubeadm.go:309] 
	I0520 12:37:38.624809  874942 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 12:37:38.624834  874942 kubeadm.go:309] 
	I0520 12:37:38.624904  874942 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 12:37:38.624915  874942 kubeadm.go:309] 
	I0520 12:37:38.624987  874942 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 12:37:38.625087  874942 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 12:37:38.625177  874942 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 12:37:38.625190  874942 kubeadm.go:309] 
	I0520 12:37:38.625294  874942 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 12:37:38.625393  874942 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 12:37:38.625401  874942 kubeadm.go:309] 
	I0520 12:37:38.625504  874942 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p522o0.g86oczkum8u4xbvc \
	I0520 12:37:38.625640  874942 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 \
	I0520 12:37:38.625665  874942 kubeadm.go:309] 	--control-plane 
	I0520 12:37:38.625669  874942 kubeadm.go:309] 
	I0520 12:37:38.625743  874942 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 12:37:38.625751  874942 kubeadm.go:309] 
	I0520 12:37:38.625821  874942 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p522o0.g86oczkum8u4xbvc \
	I0520 12:37:38.625905  874942 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 
	I0520 12:37:38.626730  874942 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 12:37:38.626758  874942 cni.go:84] Creating CNI manager for ""
	I0520 12:37:38.626767  874942 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 12:37:38.628330  874942 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 12:37:38.629563  874942 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 12:37:38.635041  874942 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 12:37:38.635063  874942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 12:37:38.653215  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 12:37:38.996013  874942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 12:37:38.996123  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:38.996163  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-252263 minikube.k8s.io/updated_at=2024_05_20T12_37_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=ha-252263 minikube.k8s.io/primary=true
	I0520 12:37:39.197259  874942 ops.go:34] apiserver oom_adj: -16
	I0520 12:37:39.210739  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:39.711433  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:40.211152  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:40.711747  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:41.211750  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:41.711621  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:42.210990  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:42.711310  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:43.211778  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:43.711715  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:44.210984  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:44.711031  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:45.211225  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:45.711429  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:46.211002  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:46.711628  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:47.211522  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:47.710836  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:48.211782  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:48.711468  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:49.211155  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:49.711475  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:50.211745  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:50.710800  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:51.210889  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:37:51.301254  874942 kubeadm.go:1107] duration metric: took 12.305202177s to wait for elevateKubeSystemPrivileges
	W0520 12:37:51.301299  874942 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 12:37:51.301309  874942 kubeadm.go:393] duration metric: took 23.601205588s to StartCluster
	I0520 12:37:51.301333  874942 settings.go:142] acquiring lock: {Name:mk4281d9011919f2beed93cad1a6e2e67e70984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:51.301428  874942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:37:51.302351  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:37:51.302610  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 12:37:51.302630  874942 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:37:51.302659  874942 start.go:240] waiting for startup goroutines ...
	I0520 12:37:51.302673  874942 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 12:37:51.302736  874942 addons.go:69] Setting storage-provisioner=true in profile "ha-252263"
	I0520 12:37:51.302746  874942 addons.go:69] Setting default-storageclass=true in profile "ha-252263"
	I0520 12:37:51.302779  874942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-252263"
	I0520 12:37:51.302780  874942 addons.go:234] Setting addon storage-provisioner=true in "ha-252263"
	I0520 12:37:51.302911  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:37:51.302918  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:37:51.303193  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.303230  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.303282  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.303316  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.318749  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44953
	I0520 12:37:51.319064  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0520 12:37:51.319291  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.319463  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.319831  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.319852  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.319992  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.320021  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.320183  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.320372  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:51.320393  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.320890  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.320934  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.322540  874942 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:37:51.322896  874942 kapi.go:59] client config for ha-252263: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 12:37:51.323415  874942 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 12:37:51.323687  874942 addons.go:234] Setting addon default-storageclass=true in "ha-252263"
	I0520 12:37:51.323732  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:37:51.324104  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.324147  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.335881  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0520 12:37:51.336310  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.336828  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.336850  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.337248  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.337486  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:51.338973  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I0520 12:37:51.339236  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:51.339406  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.341013  874942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 12:37:51.339847  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.341041  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.342378  874942 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:37:51.342399  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 12:37:51.342418  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:51.342654  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.343273  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:51.343304  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:51.345687  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.346200  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:51.346222  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.346374  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:51.346549  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:51.346750  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:51.346933  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:51.358899  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I0520 12:37:51.359392  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:51.359932  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:51.359953  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:51.360245  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:51.360439  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:37:51.361971  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:37:51.362209  874942 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 12:37:51.362228  874942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 12:37:51.362248  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:37:51.364838  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.365281  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:37:51.365309  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:37:51.365425  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:37:51.365601  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:37:51.365729  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:37:51.365888  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:37:51.504230  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 12:37:51.504898  874942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 12:37:51.560126  874942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 12:37:52.337307  874942 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 12:37:52.337413  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337437  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337476  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337498  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337790  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.337806  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.337815  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337823  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337866  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.337886  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.337895  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.337906  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.337874  874942 main.go:141] libmachine: (ha-252263) DBG | Closing plugin on server side
	I0520 12:37:52.338081  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.338097  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.338187  874942 main.go:141] libmachine: (ha-252263) DBG | Closing plugin on server side
	I0520 12:37:52.338229  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.338254  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.338434  874942 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 12:37:52.338449  874942 round_trippers.go:469] Request Headers:
	I0520 12:37:52.338459  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:37:52.338470  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:37:52.351140  874942 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 12:37:52.351883  874942 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 12:37:52.351907  874942 round_trippers.go:469] Request Headers:
	I0520 12:37:52.351918  874942 round_trippers.go:473]     Content-Type: application/json
	I0520 12:37:52.351924  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:37:52.351927  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:37:52.355049  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:37:52.355311  874942 main.go:141] libmachine: Making call to close driver server
	I0520 12:37:52.355327  874942 main.go:141] libmachine: (ha-252263) Calling .Close
	I0520 12:37:52.355614  874942 main.go:141] libmachine: Successfully made call to close driver server
	I0520 12:37:52.355636  874942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 12:37:52.357649  874942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 12:37:52.358925  874942 addons.go:505] duration metric: took 1.056246899s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 12:37:52.358973  874942 start.go:245] waiting for cluster config update ...
	I0520 12:37:52.358992  874942 start.go:254] writing updated cluster config ...
	I0520 12:37:52.360660  874942 out.go:177] 
	I0520 12:37:52.362311  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:37:52.362401  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:37:52.364079  874942 out.go:177] * Starting "ha-252263-m02" control-plane node in "ha-252263" cluster
	I0520 12:37:52.365509  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:37:52.365540  874942 cache.go:56] Caching tarball of preloaded images
	I0520 12:37:52.365637  874942 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:37:52.365650  874942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:37:52.365746  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:37:52.365933  874942 start.go:360] acquireMachinesLock for ha-252263-m02: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:37:52.365996  874942 start.go:364] duration metric: took 42.379µs to acquireMachinesLock for "ha-252263-m02"
	I0520 12:37:52.366019  874942 start.go:93] Provisioning new machine with config: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:37:52.366080  874942 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0520 12:37:52.367595  874942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:37:52.367684  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:37:52.367715  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:37:52.382211  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0520 12:37:52.382574  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:37:52.383165  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:37:52.383187  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:37:52.383527  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:37:52.383710  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:37:52.383859  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:37:52.383997  874942 start.go:159] libmachine.API.Create for "ha-252263" (driver="kvm2")
	I0520 12:37:52.384018  874942 client.go:168] LocalClient.Create starting
	I0520 12:37:52.384052  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 12:37:52.384082  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:37:52.384102  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:37:52.384159  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 12:37:52.384177  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:37:52.384187  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:37:52.384203  874942 main.go:141] libmachine: Running pre-create checks...
	I0520 12:37:52.384211  874942 main.go:141] libmachine: (ha-252263-m02) Calling .PreCreateCheck
	I0520 12:37:52.384379  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetConfigRaw
	I0520 12:37:52.384792  874942 main.go:141] libmachine: Creating machine...
	I0520 12:37:52.384813  874942 main.go:141] libmachine: (ha-252263-m02) Calling .Create
	I0520 12:37:52.384981  874942 main.go:141] libmachine: (ha-252263-m02) Creating KVM machine...
	I0520 12:37:52.386304  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found existing default KVM network
	I0520 12:37:52.386441  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found existing private KVM network mk-ha-252263
	I0520 12:37:52.386551  874942 main.go:141] libmachine: (ha-252263-m02) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02 ...
	I0520 12:37:52.386572  874942 main.go:141] libmachine: (ha-252263-m02) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:37:52.386658  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.386550  875352 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:37:52.386737  874942 main.go:141] libmachine: (ha-252263-m02) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:37:52.644510  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.644386  875352 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa...
	I0520 12:37:52.885915  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.885793  875352 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/ha-252263-m02.rawdisk...
	I0520 12:37:52.885948  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Writing magic tar header
	I0520 12:37:52.885970  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Writing SSH key tar header
	I0520 12:37:52.885986  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:52.885927  875352 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02 ...
	I0520 12:37:52.886062  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02
	I0520 12:37:52.886099  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 12:37:52.886118  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02 (perms=drwx------)
	I0520 12:37:52.886137  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:37:52.886152  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 12:37:52.886172  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 12:37:52.886193  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:37:52.886208  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:37:52.886228  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 12:37:52.886242  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:37:52.886256  874942 main.go:141] libmachine: (ha-252263-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:37:52.886269  874942 main.go:141] libmachine: (ha-252263-m02) Creating domain...
	I0520 12:37:52.886388  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:37:52.886409  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Checking permissions on dir: /home
	I0520 12:37:52.886422  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Skipping /home - not owner
	I0520 12:37:52.887324  874942 main.go:141] libmachine: (ha-252263-m02) define libvirt domain using xml: 
	I0520 12:37:52.887347  874942 main.go:141] libmachine: (ha-252263-m02) <domain type='kvm'>
	I0520 12:37:52.887358  874942 main.go:141] libmachine: (ha-252263-m02)   <name>ha-252263-m02</name>
	I0520 12:37:52.887366  874942 main.go:141] libmachine: (ha-252263-m02)   <memory unit='MiB'>2200</memory>
	I0520 12:37:52.887378  874942 main.go:141] libmachine: (ha-252263-m02)   <vcpu>2</vcpu>
	I0520 12:37:52.887390  874942 main.go:141] libmachine: (ha-252263-m02)   <features>
	I0520 12:37:52.887399  874942 main.go:141] libmachine: (ha-252263-m02)     <acpi/>
	I0520 12:37:52.887406  874942 main.go:141] libmachine: (ha-252263-m02)     <apic/>
	I0520 12:37:52.887411  874942 main.go:141] libmachine: (ha-252263-m02)     <pae/>
	I0520 12:37:52.887417  874942 main.go:141] libmachine: (ha-252263-m02)     
	I0520 12:37:52.887423  874942 main.go:141] libmachine: (ha-252263-m02)   </features>
	I0520 12:37:52.887434  874942 main.go:141] libmachine: (ha-252263-m02)   <cpu mode='host-passthrough'>
	I0520 12:37:52.887455  874942 main.go:141] libmachine: (ha-252263-m02)   
	I0520 12:37:52.887469  874942 main.go:141] libmachine: (ha-252263-m02)   </cpu>
	I0520 12:37:52.887478  874942 main.go:141] libmachine: (ha-252263-m02)   <os>
	I0520 12:37:52.887493  874942 main.go:141] libmachine: (ha-252263-m02)     <type>hvm</type>
	I0520 12:37:52.887502  874942 main.go:141] libmachine: (ha-252263-m02)     <boot dev='cdrom'/>
	I0520 12:37:52.887507  874942 main.go:141] libmachine: (ha-252263-m02)     <boot dev='hd'/>
	I0520 12:37:52.887513  874942 main.go:141] libmachine: (ha-252263-m02)     <bootmenu enable='no'/>
	I0520 12:37:52.887519  874942 main.go:141] libmachine: (ha-252263-m02)   </os>
	I0520 12:37:52.887526  874942 main.go:141] libmachine: (ha-252263-m02)   <devices>
	I0520 12:37:52.887534  874942 main.go:141] libmachine: (ha-252263-m02)     <disk type='file' device='cdrom'>
	I0520 12:37:52.887546  874942 main.go:141] libmachine: (ha-252263-m02)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/boot2docker.iso'/>
	I0520 12:37:52.887553  874942 main.go:141] libmachine: (ha-252263-m02)       <target dev='hdc' bus='scsi'/>
	I0520 12:37:52.887559  874942 main.go:141] libmachine: (ha-252263-m02)       <readonly/>
	I0520 12:37:52.887566  874942 main.go:141] libmachine: (ha-252263-m02)     </disk>
	I0520 12:37:52.887571  874942 main.go:141] libmachine: (ha-252263-m02)     <disk type='file' device='disk'>
	I0520 12:37:52.887577  874942 main.go:141] libmachine: (ha-252263-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:37:52.887603  874942 main.go:141] libmachine: (ha-252263-m02)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/ha-252263-m02.rawdisk'/>
	I0520 12:37:52.887624  874942 main.go:141] libmachine: (ha-252263-m02)       <target dev='hda' bus='virtio'/>
	I0520 12:37:52.887631  874942 main.go:141] libmachine: (ha-252263-m02)     </disk>
	I0520 12:37:52.887638  874942 main.go:141] libmachine: (ha-252263-m02)     <interface type='network'>
	I0520 12:37:52.887645  874942 main.go:141] libmachine: (ha-252263-m02)       <source network='mk-ha-252263'/>
	I0520 12:37:52.887652  874942 main.go:141] libmachine: (ha-252263-m02)       <model type='virtio'/>
	I0520 12:37:52.887657  874942 main.go:141] libmachine: (ha-252263-m02)     </interface>
	I0520 12:37:52.887664  874942 main.go:141] libmachine: (ha-252263-m02)     <interface type='network'>
	I0520 12:37:52.887669  874942 main.go:141] libmachine: (ha-252263-m02)       <source network='default'/>
	I0520 12:37:52.887677  874942 main.go:141] libmachine: (ha-252263-m02)       <model type='virtio'/>
	I0520 12:37:52.887682  874942 main.go:141] libmachine: (ha-252263-m02)     </interface>
	I0520 12:37:52.887687  874942 main.go:141] libmachine: (ha-252263-m02)     <serial type='pty'>
	I0520 12:37:52.887699  874942 main.go:141] libmachine: (ha-252263-m02)       <target port='0'/>
	I0520 12:37:52.887712  874942 main.go:141] libmachine: (ha-252263-m02)     </serial>
	I0520 12:37:52.887722  874942 main.go:141] libmachine: (ha-252263-m02)     <console type='pty'>
	I0520 12:37:52.887733  874942 main.go:141] libmachine: (ha-252263-m02)       <target type='serial' port='0'/>
	I0520 12:37:52.887744  874942 main.go:141] libmachine: (ha-252263-m02)     </console>
	I0520 12:37:52.887754  874942 main.go:141] libmachine: (ha-252263-m02)     <rng model='virtio'>
	I0520 12:37:52.887771  874942 main.go:141] libmachine: (ha-252263-m02)       <backend model='random'>/dev/random</backend>
	I0520 12:37:52.887784  874942 main.go:141] libmachine: (ha-252263-m02)     </rng>
	I0520 12:37:52.887792  874942 main.go:141] libmachine: (ha-252263-m02)     
	I0520 12:37:52.887796  874942 main.go:141] libmachine: (ha-252263-m02)     
	I0520 12:37:52.887802  874942 main.go:141] libmachine: (ha-252263-m02)   </devices>
	I0520 12:37:52.887806  874942 main.go:141] libmachine: (ha-252263-m02) </domain>
	I0520 12:37:52.887816  874942 main.go:141] libmachine: (ha-252263-m02) 
	I0520 12:37:52.894397  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:86:3e:c8 in network default
	I0520 12:37:52.894920  874942 main.go:141] libmachine: (ha-252263-m02) Ensuring networks are active...
	I0520 12:37:52.894936  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:52.895632  874942 main.go:141] libmachine: (ha-252263-m02) Ensuring network default is active
	I0520 12:37:52.895903  874942 main.go:141] libmachine: (ha-252263-m02) Ensuring network mk-ha-252263 is active
	I0520 12:37:52.896228  874942 main.go:141] libmachine: (ha-252263-m02) Getting domain xml...
	I0520 12:37:52.896938  874942 main.go:141] libmachine: (ha-252263-m02) Creating domain...
	I0520 12:37:54.137521  874942 main.go:141] libmachine: (ha-252263-m02) Waiting to get IP...
	I0520 12:37:54.138340  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:54.138744  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:54.138800  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:54.138726  875352 retry.go:31] will retry after 192.479928ms: waiting for machine to come up
	I0520 12:37:54.333310  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:54.333806  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:54.333838  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:54.333745  875352 retry.go:31] will retry after 325.539642ms: waiting for machine to come up
	I0520 12:37:54.660916  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:54.661370  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:54.661395  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:54.661314  875352 retry.go:31] will retry after 338.837064ms: waiting for machine to come up
	I0520 12:37:55.001819  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:55.002266  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:55.002297  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:55.002214  875352 retry.go:31] will retry after 573.584149ms: waiting for machine to come up
	I0520 12:37:55.577088  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:55.577722  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:55.577755  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:55.577579  875352 retry.go:31] will retry after 487.137601ms: waiting for machine to come up
	I0520 12:37:56.066173  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:56.066713  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:56.066750  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:56.066643  875352 retry.go:31] will retry after 619.061485ms: waiting for machine to come up
	I0520 12:37:56.686886  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:56.687348  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:56.687377  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:56.687285  875352 retry.go:31] will retry after 1.172165578s: waiting for machine to come up
	I0520 12:37:57.861266  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:57.861789  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:57.861836  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:57.861748  875352 retry.go:31] will retry after 1.198369396s: waiting for machine to come up
	I0520 12:37:59.061207  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:37:59.061666  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:37:59.061695  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:37:59.061607  875352 retry.go:31] will retry after 1.159246595s: waiting for machine to come up
	I0520 12:38:00.222945  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:00.223295  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:00.223323  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:00.223248  875352 retry.go:31] will retry after 1.591878155s: waiting for machine to come up
	I0520 12:38:01.816669  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:01.817147  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:01.817186  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:01.817078  875352 retry.go:31] will retry after 2.342714609s: waiting for machine to come up
	I0520 12:38:04.160937  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:04.161348  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:04.161372  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:04.161308  875352 retry.go:31] will retry after 2.689545134s: waiting for machine to come up
	I0520 12:38:06.852983  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:06.853350  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:06.853381  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:06.853309  875352 retry.go:31] will retry after 3.47993687s: waiting for machine to come up
	I0520 12:38:10.334414  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:10.334773  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find current IP address of domain ha-252263-m02 in network mk-ha-252263
	I0520 12:38:10.334805  874942 main.go:141] libmachine: (ha-252263-m02) DBG | I0520 12:38:10.334757  875352 retry.go:31] will retry after 4.302575583s: waiting for machine to come up
	I0520 12:38:14.639801  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:14.640153  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:14.640188  874942 main.go:141] libmachine: (ha-252263-m02) Found IP for machine: 192.168.39.22
	I0520 12:38:14.640245  874942 main.go:141] libmachine: (ha-252263-m02) Reserving static IP address...
	I0520 12:38:14.640554  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find host DHCP lease matching {name: "ha-252263-m02", mac: "52:54:00:f8:3d:6b", ip: "192.168.39.22"} in network mk-ha-252263
	I0520 12:38:14.712950  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Getting to WaitForSSH function...
	I0520 12:38:14.712995  874942 main.go:141] libmachine: (ha-252263-m02) Reserved static IP address: 192.168.39.22
	I0520 12:38:14.713044  874942 main.go:141] libmachine: (ha-252263-m02) Waiting for SSH to be available...
	I0520 12:38:14.715636  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:14.715942  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263
	I0520 12:38:14.715971  874942 main.go:141] libmachine: (ha-252263-m02) DBG | unable to find defined IP address of network mk-ha-252263 interface with MAC address 52:54:00:f8:3d:6b
	I0520 12:38:14.716148  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH client type: external
	I0520 12:38:14.716174  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa (-rw-------)
	I0520 12:38:14.716206  874942 main.go:141] libmachine: (ha-252263-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:38:14.716221  874942 main.go:141] libmachine: (ha-252263-m02) DBG | About to run SSH command:
	I0520 12:38:14.716240  874942 main.go:141] libmachine: (ha-252263-m02) DBG | exit 0
	I0520 12:38:14.719748  874942 main.go:141] libmachine: (ha-252263-m02) DBG | SSH cmd err, output: exit status 255: 
	I0520 12:38:14.719768  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 12:38:14.719775  874942 main.go:141] libmachine: (ha-252263-m02) DBG | command : exit 0
	I0520 12:38:14.719792  874942 main.go:141] libmachine: (ha-252263-m02) DBG | err     : exit status 255
	I0520 12:38:14.719808  874942 main.go:141] libmachine: (ha-252263-m02) DBG | output  : 
	I0520 12:38:17.720763  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Getting to WaitForSSH function...
	I0520 12:38:17.723007  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.723453  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:17.723492  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.723591  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH client type: external
	I0520 12:38:17.723614  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa (-rw-------)
	I0520 12:38:17.723641  874942 main.go:141] libmachine: (ha-252263-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:38:17.723661  874942 main.go:141] libmachine: (ha-252263-m02) DBG | About to run SSH command:
	I0520 12:38:17.723681  874942 main.go:141] libmachine: (ha-252263-m02) DBG | exit 0
	I0520 12:38:17.851148  874942 main.go:141] libmachine: (ha-252263-m02) DBG | SSH cmd err, output: <nil>: 
	I0520 12:38:17.851442  874942 main.go:141] libmachine: (ha-252263-m02) KVM machine creation complete!
	I0520 12:38:17.851752  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetConfigRaw
	I0520 12:38:17.852318  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:17.852584  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:17.852759  874942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:38:17.852778  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:38:17.854013  874942 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:38:17.854029  874942 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:38:17.854035  874942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:38:17.854041  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:17.856077  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.856418  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:17.856447  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.856606  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:17.856825  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.856987  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.857132  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:17.857297  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:17.857501  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:17.857511  874942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:38:17.966235  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:38:17.966263  874942 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:38:17.966274  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:17.968639  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.968970  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:17.969001  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:17.969123  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:17.969315  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.969472  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:17.969623  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:17.969813  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:17.970030  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:17.970044  874942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:38:18.083552  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:38:18.083627  874942 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:38:18.083636  874942 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:38:18.083645  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:38:18.083940  874942 buildroot.go:166] provisioning hostname "ha-252263-m02"
	I0520 12:38:18.083972  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:38:18.084172  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.087080  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.087485  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.087510  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.087644  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.087831  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.088009  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.088189  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.088342  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.088519  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.088535  874942 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263-m02 && echo "ha-252263-m02" | sudo tee /etc/hostname
	I0520 12:38:18.211635  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263-m02
	
	I0520 12:38:18.211668  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.214782  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.215150  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.215178  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.215379  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.215590  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.215775  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.215943  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.216127  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.216294  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.216311  874942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:38:18.332285  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
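
	The hostname step above is idempotent: it sets the hostname, then either rewrites an existing 127.0.1.1 entry in /etc/hosts or appends a new one. A small Go sketch that assembles the same shell fragment for an arbitrary node name (illustrative only; the exact command string is the one printed in the log):

    package main

    import "fmt"

    // hostsCommand returns a shell fragment that sets the hostname and makes sure
    // /etc/hosts carries a 127.0.1.1 entry for the node, rewriting an existing
    // entry when one is present.
    func hostsCommand(name string) string {
        return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
        fmt.Println(hostsCommand("ha-252263-m02"))
    }
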
	I0520 12:38:18.332319  874942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:38:18.332341  874942 buildroot.go:174] setting up certificates
	I0520 12:38:18.332361  874942 provision.go:84] configureAuth start
	I0520 12:38:18.332376  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetMachineName
	I0520 12:38:18.332703  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:18.335191  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.335530  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.335558  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.335676  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.337556  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.337857  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.337888  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.338030  874942 provision.go:143] copyHostCerts
	I0520 12:38:18.338068  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:38:18.338109  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:38:18.338122  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:38:18.338199  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:38:18.338333  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:38:18.338363  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:38:18.338374  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:38:18.338416  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:38:18.338483  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:38:18.338506  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:38:18.338514  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:38:18.338541  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:38:18.338610  874942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263-m02 san=[127.0.0.1 192.168.39.22 ha-252263-m02 localhost minikube]
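
	provision.go then generates a per-machine server certificate whose SANs cover loopback, the guest IP and the node names listed above. A self-contained sketch of the SAN handling using the standard crypto/x509 package; for brevity it self-signs rather than signing with the shared minikube CA, so treat it as an illustration of the SAN list only:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-252263-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log: loopback, guest IP, node name, localhost, minikube.
            DNSNames:    []string{"ha-252263-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.22")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
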
	I0520 12:38:18.401827  874942 provision.go:177] copyRemoteCerts
	I0520 12:38:18.401892  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:38:18.401921  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.404423  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.404727  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.404747  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.405074  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.405337  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.405507  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.405673  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:18.489155  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:38:18.489248  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:38:18.513816  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:38:18.513892  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:38:18.537782  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:38:18.537857  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:38:18.562319  874942 provision.go:87] duration metric: took 229.942119ms to configureAuth
	I0520 12:38:18.562351  874942 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:38:18.562567  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:38:18.562662  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.565464  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.565905  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.565942  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.566123  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.566451  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.566669  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.566842  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.567056  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.567268  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.567291  874942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:38:18.827916  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:38:18.827949  874942 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:38:18.827960  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetURL
	I0520 12:38:18.829240  874942 main.go:141] libmachine: (ha-252263-m02) DBG | Using libvirt version 6000000
	I0520 12:38:18.831406  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.831794  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.831823  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.831962  874942 main.go:141] libmachine: Docker is up and running!
	I0520 12:38:18.831976  874942 main.go:141] libmachine: Reticulating splines...
	I0520 12:38:18.831984  874942 client.go:171] duration metric: took 26.447954823s to LocalClient.Create
	I0520 12:38:18.832006  874942 start.go:167] duration metric: took 26.448010511s to libmachine.API.Create "ha-252263"
	I0520 12:38:18.832016  874942 start.go:293] postStartSetup for "ha-252263-m02" (driver="kvm2")
	I0520 12:38:18.832026  874942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:38:18.832043  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:18.832297  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:38:18.832328  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.834658  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.835010  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.835051  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.835160  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.835368  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.835507  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.835750  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:18.921789  874942 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:38:18.926130  874942 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:38:18.926160  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:38:18.926229  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:38:18.926308  874942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:38:18.926319  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:38:18.926401  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:38:18.936277  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:38:18.959633  874942 start.go:296] duration metric: took 127.60085ms for postStartSetup
	I0520 12:38:18.959689  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetConfigRaw
	I0520 12:38:18.960282  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:18.963033  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.963353  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.963376  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.963606  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:38:18.963783  874942 start.go:128] duration metric: took 26.597693013s to createHost
	I0520 12:38:18.963808  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:18.966087  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.966481  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:18.966517  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:18.966671  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:18.966915  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.967077  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:18.967209  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:18.967430  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:38:18.967598  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0520 12:38:18.967608  874942 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:38:19.075872  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208699.055192413
	
	I0520 12:38:19.075899  874942 fix.go:216] guest clock: 1716208699.055192413
	I0520 12:38:19.075906  874942 fix.go:229] Guest: 2024-05-20 12:38:19.055192413 +0000 UTC Remote: 2024-05-20 12:38:18.963794268 +0000 UTC m=+83.475577267 (delta=91.398145ms)
	I0520 12:38:19.075922  874942 fix.go:200] guest clock delta is within tolerance: 91.398145ms
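
	After provisioning, the guest clock (read with date +%s.%N) is compared against the host clock and accepted here because the 91ms delta is within tolerance. A sketch of that comparison; the tolerance constant below is illustrative, not necessarily the exact threshold minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output such as "1716208699.055192413"
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1716208699.055192413")
        host := time.Date(2024, 5, 20, 12, 38, 18, 963794268, time.UTC)
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta < tolerance)
    }
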
	I0520 12:38:19.075927  874942 start.go:83] releasing machines lock for "ha-252263-m02", held for 26.709919409s
	I0520 12:38:19.075945  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.076209  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:19.079701  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.080070  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:19.080096  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.082634  874942 out.go:177] * Found network options:
	I0520 12:38:19.084160  874942 out.go:177]   - NO_PROXY=192.168.39.182
	W0520 12:38:19.085403  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:38:19.085449  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.085975  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.086157  874942 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:38:19.086257  874942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:38:19.086310  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	W0520 12:38:19.086320  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:38:19.086394  874942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:38:19.086418  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:38:19.088785  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089158  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089189  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:19.089213  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089391  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:19.089590  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:19.089655  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:19.089678  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:19.089784  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:19.089864  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:38:19.090033  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:19.090411  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:38:19.090572  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:38:19.090749  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:38:19.324093  874942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:38:19.331022  874942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:38:19.331094  874942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:38:19.347892  874942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:38:19.347911  874942 start.go:494] detecting cgroup driver to use...
	I0520 12:38:19.347980  874942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:38:19.364955  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:38:19.379483  874942 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:38:19.379530  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:38:19.392802  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:38:19.405888  874942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:38:19.514514  874942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:38:19.692620  874942 docker.go:233] disabling docker service ...
	I0520 12:38:19.692698  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:38:19.707446  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:38:19.721687  874942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:38:19.838194  874942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:38:19.949936  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:38:19.964631  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:38:19.983818  874942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:38:19.983889  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:19.994815  874942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:38:19.994894  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.005752  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.016982  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.035035  874942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:38:20.046485  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.056549  874942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:38:20.073191  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
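
	The CRI-O configuration above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, and ensure net.ipv4.ip_unprivileged_port_start=0 appears under default_sysctls. The same rewrites expressed as pure string transforms in Go (a simplified sketch that omits the conmon_cgroup handling; the real code shells out to sed as shown):

    package main

    import (
        "fmt"
        "regexp"
    )

    func configureCrio(conf string) string {
        // Pin the pause image used for pod sandboxes.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // Use cgroupfs rather than systemd as the cgroup manager.
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Make sure unprivileged ports are allowed via default_sysctls.
        if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
            conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
        }
        return conf
    }

    func main() {
        fmt.Println(configureCrio("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"))
    }
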
	I0520 12:38:20.083150  874942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:38:20.092175  874942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:38:20.092230  874942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:38:20.104850  874942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
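
	The netfilter probe above is deliberately tolerant: if sysctl net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded, the module is loaded and IPv4 forwarding is enabled afterwards. A sketch of the same fallback (requires root; shown for illustration only):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Probe the bridge netfilter sysctl; a failure usually just means the
        // br_netfilter module has not been loaded yet.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            log.Printf("sysctl probe failed (%v); loading br_netfilter", err)
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v", err)
            }
        }
        // Enable IPv4 forwarding, as the provisioner does with `echo 1 > .../ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            log.Fatalf("enable ip_forward: %v", err)
        }
    }
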
	I0520 12:38:20.114172  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:38:20.230940  874942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:38:20.369577  874942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:38:20.369648  874942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:38:20.374381  874942 start.go:562] Will wait 60s for crictl version
	I0520 12:38:20.374441  874942 ssh_runner.go:195] Run: which crictl
	I0520 12:38:20.378268  874942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:38:20.420213  874942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:38:20.420283  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:38:20.447229  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:38:20.475802  874942 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:38:20.477391  874942 out.go:177]   - env NO_PROXY=192.168.39.182
	I0520 12:38:20.478647  874942 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:38:20.481074  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:20.481427  874942 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:38:06 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:38:20.481458  874942 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:38:20.481619  874942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:38:20.485598  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:38:20.498327  874942 mustload.go:65] Loading cluster: ha-252263
	I0520 12:38:20.498517  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:38:20.498773  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:38:20.498801  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:38:20.513186  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0520 12:38:20.513621  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:38:20.514113  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:38:20.514133  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:38:20.514454  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:38:20.514641  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:38:20.516315  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:38:20.516605  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:38:20.516630  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:38:20.530533  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0520 12:38:20.530957  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:38:20.531387  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:38:20.531408  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:38:20.531750  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:38:20.531901  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:38:20.532079  874942 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.22
	I0520 12:38:20.532092  874942 certs.go:194] generating shared ca certs ...
	I0520 12:38:20.532106  874942 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:38:20.532226  874942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:38:20.532269  874942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:38:20.532283  874942 certs.go:256] generating profile certs ...
	I0520 12:38:20.532357  874942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:38:20.532383  874942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66
	I0520 12:38:20.532397  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.22 192.168.39.254]
	I0520 12:38:20.704724  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66 ...
	I0520 12:38:20.704764  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66: {Name:mk90854b85c58258865cd7915fa91b5b8292a209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:38:20.704946  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66 ...
	I0520 12:38:20.704968  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66: {Name:mk4f87701cc78eff0286b15f5fc1624a9aabe73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:38:20.705066  874942 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.da923b66 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:38:20.705205  874942 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.da923b66 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
	I0520 12:38:20.705379  874942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:38:20.705399  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:38:20.705415  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:38:20.705425  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:38:20.705435  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:38:20.705448  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:38:20.705468  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:38:20.705484  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:38:20.705500  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:38:20.705560  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:38:20.705602  874942 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:38:20.705615  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:38:20.705648  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:38:20.705680  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:38:20.705713  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:38:20.705768  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:38:20.705807  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:20.705830  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:38:20.705848  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:38:20.705890  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:38:20.709247  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:20.709595  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:38:20.709627  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:20.709770  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:38:20.710174  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:38:20.710385  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:38:20.710573  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:38:20.783208  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 12:38:20.789780  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 12:38:20.800913  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 12:38:20.805394  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 12:38:20.816232  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 12:38:20.821031  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 12:38:20.831501  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 12:38:20.836361  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 12:38:20.846199  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 12:38:20.850364  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 12:38:20.860304  874942 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 12:38:20.864515  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 12:38:20.875902  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:38:20.901107  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:38:20.924273  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:38:20.946974  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:38:20.969264  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 12:38:20.991963  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:38:21.014098  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:38:21.036723  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:38:21.061090  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:38:21.085430  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:38:21.109466  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:38:21.131873  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 12:38:21.147577  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 12:38:21.162873  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 12:38:21.178418  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 12:38:21.193875  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 12:38:21.209750  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 12:38:21.225684  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 12:38:21.241595  874942 ssh_runner.go:195] Run: openssl version
	I0520 12:38:21.247131  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:38:21.257262  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:21.261466  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:21.261521  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:38:21.266938  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:38:21.277544  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:38:21.287930  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:38:21.292187  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:38:21.292230  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:38:21.297613  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:38:21.307788  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:38:21.319198  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:38:21.323610  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:38:21.323660  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:38:21.329016  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:38:21.339656  874942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:38:21.343537  874942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:38:21.343585  874942 kubeadm.go:928] updating node {m02 192.168.39.22 8443 v1.30.1 crio true true} ...
	I0520 12:38:21.343664  874942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
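
	The kubelet drop-in above overrides ExecStart so the joining node runs with its own hostname override and node IP. A sketch of how such a flag line can be assembled from the node parameters (the helper is illustrative; the flags mirror the unit printed in the log):

    package main

    import "fmt"

    // kubeletExecStart builds the ExecStart line for a node, mirroring the unit
    // shown in the log above.
    func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
        return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet"+
            " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
            " --config=/var/lib/kubelet/config.yaml"+
            " --hostname-override=%s"+
            " --kubeconfig=/etc/kubernetes/kubelet.conf"+
            " --node-ip=%s", k8sVersion, nodeName, nodeIP)
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.30.1", "ha-252263-m02", "192.168.39.22"))
    }
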
	I0520 12:38:21.343691  874942 kube-vip.go:115] generating kube-vip config ...
	I0520 12:38:21.343718  874942 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:38:21.359494  874942 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:38:21.359573  874942 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
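
	The kube-vip static pod above is what provides the shared control-plane endpoint: it performs leader election across the control-plane nodes, advertises 192.168.39.254 on eth0, and (as the log notes) has control-plane load-balancing on port 8443 auto-enabled. A sketch of filling the variable parts of such a manifest from a Go text/template, trimmed to the env entries that actually vary between clusters:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeVipEnv covers only the fields that differ per cluster; the full manifest
    // is the one printed in the log above.
    const kubeVipEnv = `    env:
        - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
        - name: lb_enable
          value: "true"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
        _ = t.Execute(os.Stdout, struct {
            Interface, VIP string
            Port           int
        }{Interface: "eth0", VIP: "192.168.39.254", Port: 8443})
    }
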
	I0520 12:38:21.359630  874942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:38:21.369004  874942 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 12:38:21.369070  874942 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 12:38:21.378284  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 12:38:21.378309  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:38:21.378347  874942 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0520 12:38:21.378377  874942 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0520 12:38:21.378383  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:38:21.382550  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 12:38:21.382590  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 12:38:21.923517  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:38:21.923591  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:38:21.929603  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 12:38:21.929635  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 12:38:22.258396  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:38:22.272569  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:38:22.272682  874942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:38:22.277271  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 12:38:22.277302  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
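
	kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with a published .sha256 per binary and then copied into /var/lib/minikube/binaries on the guest. A stdlib-only sketch of the verify-before-install part (the helper names are hypothetical):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // fetchVerified downloads url, checks it against the hex digest served at
    // url+".sha256", and returns the body on success.
    func fetchVerified(url string) ([]byte, error) {
        body, err := get(url)
        if err != nil {
            return nil, err
        }
        sum, err := get(url + ".sha256")
        if err != nil {
            return nil, err
        }
        got := sha256.Sum256(body)
        want := strings.Fields(string(sum))[0]
        if hex.EncodeToString(got[:]) != want {
            return nil, fmt.Errorf("checksum mismatch for %s", url)
        }
        return body, nil
    }

    func get(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        b, err := fetchVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl")
        fmt.Println(len(b), err)
    }
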
	I0520 12:38:22.687133  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 12:38:22.696541  874942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:38:22.713190  874942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:38:22.729330  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 12:38:22.745861  874942 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:38:22.749718  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:38:22.762163  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:38:22.888499  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:38:22.905439  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:38:22.905827  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:38:22.905871  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:38:22.925249  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0520 12:38:22.925768  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:38:22.926297  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:38:22.926329  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:38:22.926648  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:38:22.926855  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:38:22.926978  874942 start.go:316] joinCluster: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:38:22.927128  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 12:38:22.927154  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:38:22.930191  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:22.930643  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:38:22.930670  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:38:22.931099  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:38:22.931318  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:38:22.931473  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:38:22.931610  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:38:23.081518  874942 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:38:23.081581  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5hi4iu.txjljiqwqlue37gn --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I0520 12:38:45.161655  874942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5hi4iu.txjljiqwqlue37gn --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (22.080039262s)
	I0520 12:38:45.161700  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 12:38:45.717850  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-252263-m02 minikube.k8s.io/updated_at=2024_05_20T12_38_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=ha-252263 minikube.k8s.io/primary=false
	I0520 12:38:45.818374  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-252263-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 12:38:45.972007  874942 start.go:318] duration metric: took 23.045022352s to joinCluster
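The joinCluster phase above reduces to two kubeadm calls: `kubeadm token create --print-join-command --ttl=0` on the existing control plane, then replaying the printed join command on the new machine with the `--control-plane` and advertise-address flags, followed by labelling the node and removing the control-plane taint. The snippet below is only an illustrative sketch of how such a join command is assembled; the helper name and the placeholder token/hash values are assumptions for the example, not minikube's actual implementation.

```go
// Hypothetical sketch: compose the kubeadm join invocation for a secondary
// control-plane node, mirroring the command visible in the log above.
// Token, hash and addresses are placeholders, not real cluster secrets.
package main

import (
	"fmt"
	"strings"
)

func joinCommand(endpoint, token, caCertHash, advertiseIP string) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caCertHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>", "192.168.39.22"))
}
```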
	I0520 12:38:45.972097  874942 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:38:45.973575  874942 out.go:177] * Verifying Kubernetes components...
	I0520 12:38:45.972387  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:38:45.975129  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:38:46.205640  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:38:46.226306  874942 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:38:46.226517  874942 kapi.go:59] client config for ha-252263: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 12:38:46.226577  874942 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.182:8443
	I0520 12:38:46.226802  874942 node_ready.go:35] waiting up to 6m0s for node "ha-252263-m02" to be "Ready" ...
	I0520 12:38:46.226943  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:46.226949  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:46.226957  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:46.226961  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:46.245572  874942 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0520 12:38:46.727739  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:46.727763  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:46.727776  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:46.727782  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:46.732127  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:47.228051  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:47.228076  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:47.228092  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:47.228097  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:47.231911  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:47.727055  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:47.727084  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:47.727095  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:47.727102  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:47.735122  874942 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 12:38:48.227827  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:48.227848  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:48.227855  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:48.227859  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:48.233298  874942 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 12:38:48.234147  874942 node_ready.go:53] node "ha-252263-m02" has status "Ready":"False"
	I0520 12:38:48.727051  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:48.727080  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:48.727089  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:48.727094  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:48.731160  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:49.227153  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:49.227178  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:49.227188  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:49.227194  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:49.230118  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:49.727080  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:49.727106  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:49.727115  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:49.727117  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:49.730235  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.227040  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:50.227067  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.227075  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.227079  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.230058  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.727562  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:50.727586  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.727597  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.727604  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.731317  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.732048  874942 node_ready.go:49] node "ha-252263-m02" has status "Ready":"True"
	I0520 12:38:50.732073  874942 node_ready.go:38] duration metric: took 4.505226722s for node "ha-252263-m02" to be "Ready" ...
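The block of repeated GETs above is a simple readiness poll: roughly every 500 ms the harness fetches /api/v1/nodes/ha-252263-m02 and checks the node's Ready condition until it reports True (about 4.5 s in this run). Purely as an illustration, a stripped-down poll against the same endpoint could look like the sketch below; it assumes anonymous TLS access for brevity, whereas the real client authenticates with the client certificate and key shown in the rest.Config dump above.

```go
// Illustrative sketch of a node-Ready poll like the one logged above.
// It assumes anonymous HTTPS access for brevity; the real client
// authenticates with the client certificate from the kubeconfig.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // example only
	}}
	url := "https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(client, url); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	fmt.Println("timed out waiting for Ready")
}
```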
	I0520 12:38:50.732084  874942 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:38:50.732190  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:38:50.732202  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.732213  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.732220  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.738362  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:38:50.745180  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.745277  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-96h5w
	I0520 12:38:50.745287  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.745298  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.745303  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.748060  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.751342  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:38:50.751363  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.751373  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.751380  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.754720  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.755641  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace has status "Ready":"True"
	I0520 12:38:50.755668  874942 pod_ready.go:81] duration metric: took 10.464929ms for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.755680  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.755746  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2vkj
	I0520 12:38:50.755756  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.755765  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.755774  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.758425  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.759133  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:38:50.759150  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.759157  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.759162  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.761960  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.762415  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace has status "Ready":"True"
	I0520 12:38:50.762432  874942 pod_ready.go:81] duration metric: took 6.745564ms for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.762439  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.762484  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263
	I0520 12:38:50.762492  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.762501  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.762511  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.765276  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:50.765815  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:38:50.765831  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.765841  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.765846  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.769196  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.769588  874942 pod_ready.go:92] pod "etcd-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:38:50.769603  874942 pod_ready.go:81] duration metric: took 7.157596ms for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.769610  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:38:50.769660  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:50.769670  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.769677  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.769680  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.773058  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:50.773649  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:50.773669  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:50.773680  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:50.773686  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:50.775958  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:51.269875  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:51.269905  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.269918  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.269924  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.273355  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:51.273947  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:51.273961  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.273969  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.273973  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.277730  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:51.770038  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:51.770062  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.770071  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.770076  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.773480  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:51.774205  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:51.774220  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:51.774229  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:51.774238  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:51.776847  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:52.269838  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:52.269868  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.269878  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.269882  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.272746  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:52.273346  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:52.273360  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.273368  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.273372  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.276545  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:52.770638  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:52.770660  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.770668  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.770672  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.775017  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:52.776237  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:52.776253  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:52.776260  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:52.776265  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:52.780136  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:52.780922  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:38:53.270837  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:53.270882  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.270893  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.270899  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.274064  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:53.274813  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:53.274830  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.274838  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.274864  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.277652  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:53.770574  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:53.770600  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.770609  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.770612  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.773951  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:53.774812  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:53.774829  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:53.774836  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:53.774840  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:53.777582  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:54.270471  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:54.270498  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.270506  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.270511  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.274188  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:54.275072  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:54.275090  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.275098  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.275103  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.277898  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:54.769908  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:54.769932  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.769940  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.769943  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.773719  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:54.774388  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:54.774409  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:54.774418  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:54.774422  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:54.777209  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:55.270531  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:55.270561  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.270572  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.270578  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.274787  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:55.276186  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:55.276207  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.276218  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.276226  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.278900  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:55.279558  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:38:55.770839  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:55.770878  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.770887  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.770919  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.774406  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:55.775128  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:55.775144  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:55.775152  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:55.775156  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:55.778049  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:56.270043  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:56.270066  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.270074  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.270080  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.273102  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:56.274105  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:56.274124  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.274136  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.274141  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.276748  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:56.770724  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:56.770760  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.770774  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.770781  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.774312  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:56.775245  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:56.775262  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:56.775269  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:56.775272  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:56.777640  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:57.270518  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:57.270541  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.270547  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.270551  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.274530  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:57.275524  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:57.275538  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.275545  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.275549  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.278190  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:57.770607  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:57.770631  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.770639  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.770643  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.774885  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:57.775642  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:57.775655  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:57.775669  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:57.775674  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:57.778361  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:57.778943  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:38:58.269827  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:58.269850  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.269858  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.269861  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.273236  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:58.273879  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:58.273890  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.273898  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.273902  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.277307  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:58.770146  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:58.770172  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.770177  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.770181  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.773644  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:58.774773  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:58.774791  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:58.774802  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:58.774806  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:58.777474  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:59.270715  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:59.270740  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.270752  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.270760  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.274082  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:38:59.274756  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:59.274775  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.274783  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.274787  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.277129  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:38:59.769885  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:38:59.769908  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.769916  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.769920  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.774084  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:38:59.774920  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:38:59.774935  874942 round_trippers.go:469] Request Headers:
	I0520 12:38:59.774944  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:38:59.774951  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:38:59.777504  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:00.270554  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:00.270576  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.270584  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.270588  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.273581  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:00.274145  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:00.274161  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.274169  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.274174  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.276507  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:00.277061  874942 pod_ready.go:102] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 12:39:00.769869  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:00.769895  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.769903  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.769906  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.773404  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:00.774101  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:00.774118  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:00.774126  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:00.774131  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:00.776849  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.269841  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:01.269871  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.269881  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.269893  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.274092  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:39:01.274741  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.274760  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.274768  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.274773  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.277070  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.770409  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:39:01.770434  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.770442  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.770445  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.773398  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.774207  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.774222  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.774231  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.774236  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.777238  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.777824  874942 pod_ready.go:92] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.777843  874942 pod_ready.go:81] duration metric: took 11.008226852s for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.777858  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.777919  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263
	I0520 12:39:01.777926  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.777933  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.777937  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.782842  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:39:01.783669  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:01.783686  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.783696  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.783702  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.794447  874942 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 12:39:01.795140  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.795175  874942 pod_ready.go:81] duration metric: took 17.306245ms for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.795191  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.795284  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263-m02
	I0520 12:39:01.795298  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.795308  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.795313  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.819716  874942 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0520 12:39:01.820481  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.820499  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.820508  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.820514  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.823486  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.823930  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.823951  874942 pod_ready.go:81] duration metric: took 28.750691ms for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.823965  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.824051  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263
	I0520 12:39:01.824065  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.824075  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.824082  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.830240  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:39:01.830951  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:01.830965  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.830973  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.830976  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.833514  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.834465  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.834488  874942 pod_ready.go:81] duration metric: took 10.500265ms for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.834500  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.834568  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84x7f
	I0520 12:39:01.834579  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.834589  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.834593  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.837125  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.837755  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:39:01.837767  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.837774  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.837779  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.840077  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:01.840524  874942 pod_ready.go:92] pod "kube-proxy-84x7f" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:01.840545  874942 pod_ready.go:81] duration metric: took 6.036863ms for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.840557  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:01.970923  874942 request.go:629] Waited for 130.282489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:39:01.970980  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:39:01.970985  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:01.970992  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:01.970996  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:01.973934  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:39:02.170878  874942 request.go:629] Waited for 196.369487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.170941  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.170946  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.170959  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.170964  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.174369  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.174917  874942 pod_ready.go:92] pod "kube-proxy-z5zvt" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:02.174936  874942 pod_ready.go:81] duration metric: took 334.371338ms for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
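The request.go:629 entries above ("Waited ... due to client-side throttling, not priority and fairness") come from the Kubernetes client's token-bucket rate limiter: with QPS and Burst left at 0 in the rest.Config dump earlier, client-go falls back to its defaults of roughly 5 requests per second with a burst of 10, so the back-to-back pod and node GETs start queueing for one to two hundred milliseconds. The stand-alone sketch below reproduces that behaviour with golang.org/x/time/rate; the QPS and burst values are stated assumptions about those defaults, not something read from this log.

```go
// Stand-alone illustration of client-side throttling with a token bucket.
// The 5 QPS / burst-10 values mirror the client-go defaults assumed to
// apply when rest.Config leaves QPS and Burst at zero.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // ~5 requests/s, burst of 10
	start := time.Now()
	for i := 0; i < 15; i++ {
		waitStart := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if d := time.Since(waitStart); d > time.Millisecond {
			// Comparable to the "Waited ... due to client-side throttling" lines.
			fmt.Printf("request %2d waited %v\n", i, d.Round(time.Millisecond))
		}
	}
	fmt.Println("total:", time.Since(start).Round(time.Millisecond))
}
```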
	I0520 12:39:02.174946  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:02.371023  874942 request.go:629] Waited for 195.999349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:39:02.371093  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:39:02.371098  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.371105  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.371109  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.374674  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.570602  874942 request.go:629] Waited for 195.28291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.570682  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:39:02.570690  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.570701  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.570710  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.574742  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.575385  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:39:02.575403  874942 pod_ready.go:81] duration metric: took 400.451085ms for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:39:02.575415  874942 pod_ready.go:38] duration metric: took 11.843285919s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:39:02.575439  874942 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:39:02.575500  874942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:39:02.592758  874942 api_server.go:72] duration metric: took 16.620622173s to wait for apiserver process to appear ...
	I0520 12:39:02.592782  874942 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:39:02.592802  874942 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0520 12:39:02.597082  874942 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0520 12:39:02.597158  874942 round_trippers.go:463] GET https://192.168.39.182:8443/version
	I0520 12:39:02.597168  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.597176  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.597181  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.597994  874942 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 12:39:02.598158  874942 api_server.go:141] control plane version: v1.30.1
	I0520 12:39:02.598190  874942 api_server.go:131] duration metric: took 5.399467ms to wait for apiserver health ...
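The health check above is just an HTTPS GET against https://192.168.39.182:8443/healthz that expects a 200 response with the literal body "ok" before the control-plane version is read from /version. A minimal sketch of the same probe follows, again assuming anonymous TLS for brevity rather than the certificate-based auth the test actually uses.

```go
// Minimal sketch of the apiserver healthz probe logged above; the real
// check authenticates with client certificates, skipped here for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // example only
	}}
	resp, err := client.Get("https://192.168.39.182:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```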
	I0520 12:39:02.598200  874942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:39:02.770583  874942 request.go:629] Waited for 172.286764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:02.770652  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:02.770657  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.770665  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.770669  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.776316  874942 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 12:39:02.781078  874942 system_pods.go:59] 17 kube-system pods found
	I0520 12:39:02.781103  874942 system_pods.go:61] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:39:02.781108  874942 system_pods.go:61] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:39:02.781111  874942 system_pods.go:61] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:39:02.781114  874942 system_pods.go:61] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:39:02.781117  874942 system_pods.go:61] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:39:02.781119  874942 system_pods.go:61] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:39:02.781122  874942 system_pods.go:61] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:39:02.781124  874942 system_pods.go:61] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:39:02.781127  874942 system_pods.go:61] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:39:02.781130  874942 system_pods.go:61] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:39:02.781133  874942 system_pods.go:61] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:39:02.781136  874942 system_pods.go:61] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:39:02.781138  874942 system_pods.go:61] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:39:02.781141  874942 system_pods.go:61] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:39:02.781144  874942 system_pods.go:61] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:39:02.781147  874942 system_pods.go:61] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:39:02.781149  874942 system_pods.go:61] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:39:02.781157  874942 system_pods.go:74] duration metric: took 182.947275ms to wait for pod list to return data ...
	I0520 12:39:02.781168  874942 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:39:02.970703  874942 request.go:629] Waited for 189.443135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:39:02.970763  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:39:02.970767  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:02.970785  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:02.970798  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:02.974258  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:02.974483  874942 default_sa.go:45] found service account: "default"
	I0520 12:39:02.974499  874942 default_sa.go:55] duration metric: took 193.324555ms for default service account to be created ...
	I0520 12:39:02.974507  874942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:39:03.170944  874942 request.go:629] Waited for 196.359277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:03.171057  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:39:03.171070  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:03.171079  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:03.171086  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:03.176098  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:39:03.180564  874942 system_pods.go:86] 17 kube-system pods found
	I0520 12:39:03.180588  874942 system_pods.go:89] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:39:03.180593  874942 system_pods.go:89] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:39:03.180598  874942 system_pods.go:89] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:39:03.180602  874942 system_pods.go:89] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:39:03.180605  874942 system_pods.go:89] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:39:03.180609  874942 system_pods.go:89] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:39:03.180615  874942 system_pods.go:89] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:39:03.180621  874942 system_pods.go:89] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:39:03.180631  874942 system_pods.go:89] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:39:03.180643  874942 system_pods.go:89] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:39:03.180652  874942 system_pods.go:89] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:39:03.180661  874942 system_pods.go:89] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:39:03.180667  874942 system_pods.go:89] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:39:03.180674  874942 system_pods.go:89] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:39:03.180678  874942 system_pods.go:89] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:39:03.180684  874942 system_pods.go:89] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:39:03.180690  874942 system_pods.go:89] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:39:03.180698  874942 system_pods.go:126] duration metric: took 206.18632ms to wait for k8s-apps to be running ...
	I0520 12:39:03.180706  874942 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:39:03.180763  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:39:03.196187  874942 system_svc.go:56] duration metric: took 15.474523ms WaitForService to wait for kubelet
	I0520 12:39:03.196214  874942 kubeadm.go:576] duration metric: took 17.224081773s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
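The pod checks above come from the system_pods wait: minikube lists everything in kube-system and requires each pod to report phase Running before it moves on. A minimal sketch of an equivalent check, assuming client-go and a kubeconfig at the default path (names here are illustrative, not minikube's own code):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List kube-system pods and verify each one is Running,
	// the same condition the system_pods wait enforces.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %q is %s, not Running\n", p.Name, p.Status.Phase)
		}
	}
	fmt.Printf("%d kube-system pods checked\n", len(pods.Items))
}
```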
	I0520 12:39:03.196232  874942 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:39:03.370582  874942 request.go:629] Waited for 174.273669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes
	I0520 12:39:03.370675  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes
	I0520 12:39:03.370686  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:03.370697  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:03.370704  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:03.374520  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:39:03.375386  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:39:03.375427  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:39:03.375446  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:39:03.375451  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:39:03.375457  874942 node_conditions.go:105] duration metric: took 179.220453ms to run NodePressure ...
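The NodePressure step reads each node's reported capacity, which is what produces the two ephemeral-storage and cpu lines per node above. A small sketch of that read, reusing the clientset and imports from the previous sketch (function name is hypothetical):

```go
// checkNodeCapacity mirrors the node_conditions output: it prints the
// ephemeral-storage and cpu capacity reported by every node.
func checkNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```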
	I0520 12:39:03.375473  874942 start.go:240] waiting for startup goroutines ...
	I0520 12:39:03.375516  874942 start.go:254] writing updated cluster config ...
	I0520 12:39:03.380755  874942 out.go:177] 
	I0520 12:39:03.382323  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:03.382431  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:39:03.384034  874942 out.go:177] * Starting "ha-252263-m03" control-plane node in "ha-252263" cluster
	I0520 12:39:03.385228  874942 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:39:03.385251  874942 cache.go:56] Caching tarball of preloaded images
	I0520 12:39:03.385362  874942 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:39:03.385375  874942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:39:03.385480  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:39:03.385652  874942 start.go:360] acquireMachinesLock for ha-252263-m03: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:39:03.385711  874942 start.go:364] duration metric: took 33.926µs to acquireMachinesLock for "ha-252263-m03"
	I0520 12:39:03.385736  874942 start.go:93] Provisioning new machine with config: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:39:03.385844  874942 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0520 12:39:03.387315  874942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:39:03.387412  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:03.387455  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:03.403199  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0520 12:39:03.403675  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:03.404220  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:03.404240  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:03.404581  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:03.404800  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:03.404971  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:03.405139  874942 start.go:159] libmachine.API.Create for "ha-252263" (driver="kvm2")
	I0520 12:39:03.405162  874942 client.go:168] LocalClient.Create starting
	I0520 12:39:03.405188  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 12:39:03.405219  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:39:03.405235  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:39:03.405286  874942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 12:39:03.405304  874942 main.go:141] libmachine: Decoding PEM data...
	I0520 12:39:03.405313  874942 main.go:141] libmachine: Parsing certificate...
	I0520 12:39:03.405328  874942 main.go:141] libmachine: Running pre-create checks...
	I0520 12:39:03.405335  874942 main.go:141] libmachine: (ha-252263-m03) Calling .PreCreateCheck
	I0520 12:39:03.405544  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetConfigRaw
	I0520 12:39:03.405904  874942 main.go:141] libmachine: Creating machine...
	I0520 12:39:03.405917  874942 main.go:141] libmachine: (ha-252263-m03) Calling .Create
	I0520 12:39:03.406065  874942 main.go:141] libmachine: (ha-252263-m03) Creating KVM machine...
	I0520 12:39:03.407281  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found existing default KVM network
	I0520 12:39:03.407402  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found existing private KVM network mk-ha-252263
	I0520 12:39:03.407509  874942 main.go:141] libmachine: (ha-252263-m03) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03 ...
	I0520 12:39:03.407545  874942 main.go:141] libmachine: (ha-252263-m03) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:39:03.407598  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.407496  875716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:39:03.407683  874942 main.go:141] libmachine: (ha-252263-m03) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:39:03.671079  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.670953  875716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa...
	I0520 12:39:03.886224  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.886075  875716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/ha-252263-m03.rawdisk...
	I0520 12:39:03.886268  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Writing magic tar header
	I0520 12:39:03.886284  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Writing SSH key tar header
	I0520 12:39:03.886302  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:03.886229  875716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03 ...
	I0520 12:39:03.886399  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03
	I0520 12:39:03.886431  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 12:39:03.886445  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03 (perms=drwx------)
	I0520 12:39:03.886464  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:39:03.886474  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 12:39:03.886480  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:39:03.886491  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 12:39:03.886497  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:39:03.886506  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 12:39:03.886512  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:39:03.886525  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:39:03.886538  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Checking permissions on dir: /home
	I0520 12:39:03.886553  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Skipping /home - not owner
	I0520 12:39:03.886567  874942 main.go:141] libmachine: (ha-252263-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:39:03.886579  874942 main.go:141] libmachine: (ha-252263-m03) Creating domain...
	I0520 12:39:03.887530  874942 main.go:141] libmachine: (ha-252263-m03) define libvirt domain using xml: 
	I0520 12:39:03.887554  874942 main.go:141] libmachine: (ha-252263-m03) <domain type='kvm'>
	I0520 12:39:03.887564  874942 main.go:141] libmachine: (ha-252263-m03)   <name>ha-252263-m03</name>
	I0520 12:39:03.887571  874942 main.go:141] libmachine: (ha-252263-m03)   <memory unit='MiB'>2200</memory>
	I0520 12:39:03.887581  874942 main.go:141] libmachine: (ha-252263-m03)   <vcpu>2</vcpu>
	I0520 12:39:03.887592  874942 main.go:141] libmachine: (ha-252263-m03)   <features>
	I0520 12:39:03.887603  874942 main.go:141] libmachine: (ha-252263-m03)     <acpi/>
	I0520 12:39:03.887613  874942 main.go:141] libmachine: (ha-252263-m03)     <apic/>
	I0520 12:39:03.887631  874942 main.go:141] libmachine: (ha-252263-m03)     <pae/>
	I0520 12:39:03.887642  874942 main.go:141] libmachine: (ha-252263-m03)     
	I0520 12:39:03.887654  874942 main.go:141] libmachine: (ha-252263-m03)   </features>
	I0520 12:39:03.887666  874942 main.go:141] libmachine: (ha-252263-m03)   <cpu mode='host-passthrough'>
	I0520 12:39:03.887675  874942 main.go:141] libmachine: (ha-252263-m03)   
	I0520 12:39:03.887686  874942 main.go:141] libmachine: (ha-252263-m03)   </cpu>
	I0520 12:39:03.887694  874942 main.go:141] libmachine: (ha-252263-m03)   <os>
	I0520 12:39:03.887722  874942 main.go:141] libmachine: (ha-252263-m03)     <type>hvm</type>
	I0520 12:39:03.887735  874942 main.go:141] libmachine: (ha-252263-m03)     <boot dev='cdrom'/>
	I0520 12:39:03.887746  874942 main.go:141] libmachine: (ha-252263-m03)     <boot dev='hd'/>
	I0520 12:39:03.887757  874942 main.go:141] libmachine: (ha-252263-m03)     <bootmenu enable='no'/>
	I0520 12:39:03.887766  874942 main.go:141] libmachine: (ha-252263-m03)   </os>
	I0520 12:39:03.887776  874942 main.go:141] libmachine: (ha-252263-m03)   <devices>
	I0520 12:39:03.887787  874942 main.go:141] libmachine: (ha-252263-m03)     <disk type='file' device='cdrom'>
	I0520 12:39:03.887819  874942 main.go:141] libmachine: (ha-252263-m03)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/boot2docker.iso'/>
	I0520 12:39:03.887841  874942 main.go:141] libmachine: (ha-252263-m03)       <target dev='hdc' bus='scsi'/>
	I0520 12:39:03.887858  874942 main.go:141] libmachine: (ha-252263-m03)       <readonly/>
	I0520 12:39:03.887874  874942 main.go:141] libmachine: (ha-252263-m03)     </disk>
	I0520 12:39:03.887892  874942 main.go:141] libmachine: (ha-252263-m03)     <disk type='file' device='disk'>
	I0520 12:39:03.887910  874942 main.go:141] libmachine: (ha-252263-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:39:03.887927  874942 main.go:141] libmachine: (ha-252263-m03)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/ha-252263-m03.rawdisk'/>
	I0520 12:39:03.887937  874942 main.go:141] libmachine: (ha-252263-m03)       <target dev='hda' bus='virtio'/>
	I0520 12:39:03.887945  874942 main.go:141] libmachine: (ha-252263-m03)     </disk>
	I0520 12:39:03.887956  874942 main.go:141] libmachine: (ha-252263-m03)     <interface type='network'>
	I0520 12:39:03.887969  874942 main.go:141] libmachine: (ha-252263-m03)       <source network='mk-ha-252263'/>
	I0520 12:39:03.887981  874942 main.go:141] libmachine: (ha-252263-m03)       <model type='virtio'/>
	I0520 12:39:03.888001  874942 main.go:141] libmachine: (ha-252263-m03)     </interface>
	I0520 12:39:03.888015  874942 main.go:141] libmachine: (ha-252263-m03)     <interface type='network'>
	I0520 12:39:03.888027  874942 main.go:141] libmachine: (ha-252263-m03)       <source network='default'/>
	I0520 12:39:03.888038  874942 main.go:141] libmachine: (ha-252263-m03)       <model type='virtio'/>
	I0520 12:39:03.888048  874942 main.go:141] libmachine: (ha-252263-m03)     </interface>
	I0520 12:39:03.888055  874942 main.go:141] libmachine: (ha-252263-m03)     <serial type='pty'>
	I0520 12:39:03.888067  874942 main.go:141] libmachine: (ha-252263-m03)       <target port='0'/>
	I0520 12:39:03.888074  874942 main.go:141] libmachine: (ha-252263-m03)     </serial>
	I0520 12:39:03.888087  874942 main.go:141] libmachine: (ha-252263-m03)     <console type='pty'>
	I0520 12:39:03.888100  874942 main.go:141] libmachine: (ha-252263-m03)       <target type='serial' port='0'/>
	I0520 12:39:03.888128  874942 main.go:141] libmachine: (ha-252263-m03)     </console>
	I0520 12:39:03.888143  874942 main.go:141] libmachine: (ha-252263-m03)     <rng model='virtio'>
	I0520 12:39:03.888159  874942 main.go:141] libmachine: (ha-252263-m03)       <backend model='random'>/dev/random</backend>
	I0520 12:39:03.888175  874942 main.go:141] libmachine: (ha-252263-m03)     </rng>
	I0520 12:39:03.888187  874942 main.go:141] libmachine: (ha-252263-m03)     
	I0520 12:39:03.888197  874942 main.go:141] libmachine: (ha-252263-m03)     
	I0520 12:39:03.888208  874942 main.go:141] libmachine: (ha-252263-m03)   </devices>
	I0520 12:39:03.888219  874942 main.go:141] libmachine: (ha-252263-m03) </domain>
	I0520 12:39:03.888233  874942 main.go:141] libmachine: (ha-252263-m03) 
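The block above is the libvirt domain XML the kvm2 driver emits for the new node (memory, vCPUs, boot ISO, raw disk, and two virtio NICs on the mk-ha-252263 and default networks). A rough sketch of the define-and-start step using the libvirt Go bindings; the trimmed XML and names here are illustrative, not the actual driver code:

```go
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>demo-node</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
  <devices>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	// Connect to the system libvirt daemon, the same URI the log shows
	// as KVMQemuURI (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
```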
	I0520 12:39:03.895571  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:c1:14:2a in network default
	I0520 12:39:03.896226  874942 main.go:141] libmachine: (ha-252263-m03) Ensuring networks are active...
	I0520 12:39:03.896251  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:03.896881  874942 main.go:141] libmachine: (ha-252263-m03) Ensuring network default is active
	I0520 12:39:03.897179  874942 main.go:141] libmachine: (ha-252263-m03) Ensuring network mk-ha-252263 is active
	I0520 12:39:03.897566  874942 main.go:141] libmachine: (ha-252263-m03) Getting domain xml...
	I0520 12:39:03.898240  874942 main.go:141] libmachine: (ha-252263-m03) Creating domain...
	I0520 12:39:05.105433  874942 main.go:141] libmachine: (ha-252263-m03) Waiting to get IP...
	I0520 12:39:05.106218  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:05.106605  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:05.106663  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:05.106602  875716 retry.go:31] will retry after 189.118887ms: waiting for machine to come up
	I0520 12:39:05.296891  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:05.297288  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:05.297311  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:05.297271  875716 retry.go:31] will retry after 317.145066ms: waiting for machine to come up
	I0520 12:39:05.615752  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:05.616215  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:05.616249  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:05.616173  875716 retry.go:31] will retry after 447.616745ms: waiting for machine to come up
	I0520 12:39:06.065768  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:06.066232  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:06.066261  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:06.066176  875716 retry.go:31] will retry after 393.855692ms: waiting for machine to come up
	I0520 12:39:06.461797  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:06.462222  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:06.462251  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:06.462182  875716 retry.go:31] will retry after 722.017106ms: waiting for machine to come up
	I0520 12:39:07.186267  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:07.186837  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:07.186893  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:07.186781  875716 retry.go:31] will retry after 812.507046ms: waiting for machine to come up
	I0520 12:39:08.001315  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:08.001815  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:08.001846  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:08.001747  875716 retry.go:31] will retry after 1.17680348s: waiting for machine to come up
	I0520 12:39:09.180416  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:09.180898  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:09.180936  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:09.180842  875716 retry.go:31] will retry after 1.036373954s: waiting for machine to come up
	I0520 12:39:10.218911  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:10.219415  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:10.219449  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:10.219363  875716 retry.go:31] will retry after 1.804364122s: waiting for machine to come up
	I0520 12:39:12.025429  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:12.025849  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:12.025872  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:12.025805  875716 retry.go:31] will retry after 1.662611515s: waiting for machine to come up
	I0520 12:39:13.690240  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:13.690705  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:13.690737  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:13.690645  875716 retry.go:31] will retry after 2.645373784s: waiting for machine to come up
	I0520 12:39:16.337189  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:16.337570  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:16.337604  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:16.337513  875716 retry.go:31] will retry after 2.633391538s: waiting for machine to come up
	I0520 12:39:18.972698  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:18.973123  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:18.973152  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:18.973069  875716 retry.go:31] will retry after 3.486895075s: waiting for machine to come up
	I0520 12:39:22.461839  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:22.462465  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find current IP address of domain ha-252263-m03 in network mk-ha-252263
	I0520 12:39:22.462502  874942 main.go:141] libmachine: (ha-252263-m03) DBG | I0520 12:39:22.462423  875716 retry.go:31] will retry after 4.228316503s: waiting for machine to come up
	I0520 12:39:26.694705  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:26.695188  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has current primary IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:26.695206  874942 main.go:141] libmachine: (ha-252263-m03) Found IP for machine: 192.168.39.60
	I0520 12:39:26.695220  874942 main.go:141] libmachine: (ha-252263-m03) Reserving static IP address...
	I0520 12:39:26.695643  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find host DHCP lease matching {name: "ha-252263-m03", mac: "52:54:00:98:d8:f8", ip: "192.168.39.60"} in network mk-ha-252263
	I0520 12:39:26.769721  874942 main.go:141] libmachine: (ha-252263-m03) Reserved static IP address: 192.168.39.60
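The "will retry after …" lines above come from a retry loop that polls the DHCP leases with a growing, jittered delay until the new domain reports an address. A generic sketch of that kind of helper (the function and its parameters are hypothetical, not minikube's retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses,
// sleeping a growing, jittered interval between attempts -- the same
// shape as the "will retry after ..." messages in the log.
func retryWithBackoff(fn func() error, initial, maxWait time.Duration) error {
	delay := initial
	deadline := time.Now().Add(maxWait)
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// Jitter the delay so parallel waiters do not poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 200*time.Millisecond, 30*time.Second)
	fmt.Println("result:", err)
}
```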
	I0520 12:39:26.769772  874942 main.go:141] libmachine: (ha-252263-m03) Waiting for SSH to be available...
	I0520 12:39:26.769782  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Getting to WaitForSSH function...
	I0520 12:39:26.772161  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:26.772548  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263
	I0520 12:39:26.772580  874942 main.go:141] libmachine: (ha-252263-m03) DBG | unable to find defined IP address of network mk-ha-252263 interface with MAC address 52:54:00:98:d8:f8
	I0520 12:39:26.772762  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH client type: external
	I0520 12:39:26.772793  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa (-rw-------)
	I0520 12:39:26.772827  874942 main.go:141] libmachine: (ha-252263-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:39:26.772842  874942 main.go:141] libmachine: (ha-252263-m03) DBG | About to run SSH command:
	I0520 12:39:26.772861  874942 main.go:141] libmachine: (ha-252263-m03) DBG | exit 0
	I0520 12:39:26.776329  874942 main.go:141] libmachine: (ha-252263-m03) DBG | SSH cmd err, output: exit status 255: 
	I0520 12:39:26.776354  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 12:39:26.776368  874942 main.go:141] libmachine: (ha-252263-m03) DBG | command : exit 0
	I0520 12:39:26.776380  874942 main.go:141] libmachine: (ha-252263-m03) DBG | err     : exit status 255
	I0520 12:39:26.776390  874942 main.go:141] libmachine: (ha-252263-m03) DBG | output  : 
	I0520 12:39:29.777276  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Getting to WaitForSSH function...
	I0520 12:39:29.779672  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.780071  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:29.780104  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.780276  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH client type: external
	I0520 12:39:29.780305  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa (-rw-------)
	I0520 12:39:29.780336  874942 main.go:141] libmachine: (ha-252263-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:39:29.780361  874942 main.go:141] libmachine: (ha-252263-m03) DBG | About to run SSH command:
	I0520 12:39:29.780380  874942 main.go:141] libmachine: (ha-252263-m03) DBG | exit 0
	I0520 12:39:29.902605  874942 main.go:141] libmachine: (ha-252263-m03) DBG | SSH cmd err, output: <nil>: 
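WaitForSSH shells out to the system ssh client with the options shown and simply runs `exit 0` until the command succeeds; the first attempt above fails with exit status 255 because the lease had not yet appeared. A minimal sketch of that probe using os/exec; the flags mirror the log, the loop bounds are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH probes the guest by running "exit 0" over ssh until it
// succeeds, roughly what the WaitForSSH step in the log is doing.
func waitForSSH(host, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh", args...)
		if lastErr = cmd.Run(); lastErr == nil {
			return nil // guest answered, SSH is available
		}
		fmt.Printf("ssh not ready yet (%v), retrying...\n", lastErr)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh never became available: %w", lastErr)
}

func main() {
	// Host and key path mirror the values shown in the log; adjust as needed.
	if err := waitForSSH("192.168.39.60", "/path/to/id_rsa", 10); err != nil {
		fmt.Println(err)
	}
}
```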
	I0520 12:39:29.902890  874942 main.go:141] libmachine: (ha-252263-m03) KVM machine creation complete!
	I0520 12:39:29.903225  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetConfigRaw
	I0520 12:39:29.903833  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:29.904169  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:29.904395  874942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:39:29.904409  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:39:29.905638  874942 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:39:29.905652  874942 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:39:29.905658  874942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:39:29.905666  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:29.907571  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.907934  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:29.907968  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:29.908118  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:29.908283  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:29.908447  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:29.908603  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:29.908771  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:29.909043  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:29.909063  874942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:39:30.010161  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:39:30.010190  874942 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:39:30.010201  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.012815  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.013159  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.013185  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.013310  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.013546  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.013709  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.013837  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.013986  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.014145  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.014154  874942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:39:30.115542  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:39:30.115622  874942 main.go:141] libmachine: found compatible host: buildroot
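The provisioner is detected by running `cat /etc/os-release` on the guest and matching the ID field, which is how the log concludes "found compatible host: buildroot". A small sketch of parsing that KEY=value output (parsing only; the detection table itself is minikube-internal):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style KEY=value lines into a map,
// trimming surrounding quotes, e.g. PRETTY_NAME="Buildroot 2023.02.9".
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, found := strings.Cut(line, "=")
		if !found {
			continue
		}
		out[key] = strings.Trim(value, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println("ID:", info["ID"], "PRETTY_NAME:", info["PRETTY_NAME"])
}
```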
	I0520 12:39:30.115631  874942 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:39:30.115641  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:30.115952  874942 buildroot.go:166] provisioning hostname "ha-252263-m03"
	I0520 12:39:30.115985  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:30.116212  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.118895  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.119439  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.119467  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.119612  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.119825  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.119969  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.120096  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.120294  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.120465  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.120478  874942 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263-m03 && echo "ha-252263-m03" | sudo tee /etc/hostname
	I0520 12:39:30.237531  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263-m03
	
	I0520 12:39:30.237558  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.240315  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.240676  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.240706  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.240915  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.241108  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.241259  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.241373  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.241633  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.241807  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.241825  874942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:39:30.352476  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:39:30.352505  874942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:39:30.352522  874942 buildroot.go:174] setting up certificates
	I0520 12:39:30.352530  874942 provision.go:84] configureAuth start
	I0520 12:39:30.352540  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetMachineName
	I0520 12:39:30.352840  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:30.355295  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.355699  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.355725  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.355876  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.358109  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.358528  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.358557  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.358698  874942 provision.go:143] copyHostCerts
	I0520 12:39:30.358733  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:39:30.358794  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:39:30.358806  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:39:30.358902  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:39:30.358998  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:39:30.359023  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:39:30.359038  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:39:30.359077  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:39:30.359146  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:39:30.359171  874942 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:39:30.359181  874942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:39:30.359216  874942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:39:30.359278  874942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263-m03 san=[127.0.0.1 192.168.39.60 ha-252263-m03 localhost minikube]
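configureAuth generates a server certificate for the new node signed by the profile's CA, with the SANs listed above (loopback, the node IP, the hostname, localhost, minikube). A compact sketch of issuing such a SAN-bearing certificate with crypto/x509; the on-the-fly CA and output handling are stand-ins for minikube's own cert material:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Stand-in CA created on the fly; minikube instead loads the profile's
	// existing ca.pem / ca-key.pem from its certs directory.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate carrying the same kinds of SANs as the log line:
	// loopback, the node IP, the hostname, localhost and minikube.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-252263-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.60")},
		DNSNames:     []string{"ha-252263-m03", "localhost", "minikube"},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))

	// PEM-encode the signed server certificate (server.pem in the log).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```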
	I0520 12:39:30.469167  874942 provision.go:177] copyRemoteCerts
	I0520 12:39:30.469224  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:39:30.469251  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.471791  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.472232  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.472256  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.472471  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.472658  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.472808  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.472917  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:30.557144  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:39:30.557205  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 12:39:30.585069  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:39:30.585153  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 12:39:30.612358  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:39:30.612431  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:39:30.638936  874942 provision.go:87] duration metric: took 286.390722ms to configureAuth
	I0520 12:39:30.638969  874942 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:39:30.639205  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:30.639292  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.642201  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.642549  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.642578  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.642744  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.642974  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.643162  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.643313  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.643509  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:30.643682  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:30.643704  874942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:39:30.918204  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:39:30.918247  874942 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:39:30.918264  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetURL
	I0520 12:39:30.919832  874942 main.go:141] libmachine: (ha-252263-m03) DBG | Using libvirt version 6000000
	I0520 12:39:30.922674  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.923095  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.923137  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.923326  874942 main.go:141] libmachine: Docker is up and running!
	I0520 12:39:30.923338  874942 main.go:141] libmachine: Reticulating splines...
	I0520 12:39:30.923346  874942 client.go:171] duration metric: took 27.518176652s to LocalClient.Create
	I0520 12:39:30.923372  874942 start.go:167] duration metric: took 27.518234415s to libmachine.API.Create "ha-252263"
	I0520 12:39:30.923386  874942 start.go:293] postStartSetup for "ha-252263-m03" (driver="kvm2")
	I0520 12:39:30.923403  874942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:39:30.923426  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:30.923669  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:39:30.923705  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:30.925871  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.926250  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:30.926275  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:30.926427  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:30.926580  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:30.926788  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:30.926941  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:31.004832  874942 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:39:31.009213  874942 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:39:31.009238  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:39:31.009302  874942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:39:31.009388  874942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:39:31.009399  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:39:31.009502  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:39:31.018921  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:39:31.041753  874942 start.go:296] duration metric: took 118.352566ms for postStartSetup
	I0520 12:39:31.041802  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetConfigRaw
	I0520 12:39:31.042324  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:31.045019  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.045387  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.045412  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.045723  874942 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:39:31.045972  874942 start.go:128] duration metric: took 27.660113785s to createHost
	I0520 12:39:31.046007  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:31.048377  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.048756  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.048789  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.048924  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:31.049136  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.049311  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.049478  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:31.049653  874942 main.go:141] libmachine: Using SSH client type: native
	I0520 12:39:31.049859  874942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0520 12:39:31.049874  874942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 12:39:31.152020  874942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208771.129480022
	
	I0520 12:39:31.152044  874942 fix.go:216] guest clock: 1716208771.129480022
	I0520 12:39:31.152053  874942 fix.go:229] Guest: 2024-05-20 12:39:31.129480022 +0000 UTC Remote: 2024-05-20 12:39:31.045989813 +0000 UTC m=+155.557772815 (delta=83.490209ms)
	I0520 12:39:31.152077  874942 fix.go:200] guest clock delta is within tolerance: 83.490209ms
	I0520 12:39:31.152084  874942 start.go:83] releasing machines lock for "ha-252263-m03", held for 27.766362061s
	I0520 12:39:31.152108  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.152411  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:31.154957  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.155385  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.155419  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.157319  874942 out.go:177] * Found network options:
	I0520 12:39:31.158655  874942 out.go:177]   - NO_PROXY=192.168.39.182,192.168.39.22
	W0520 12:39:31.159809  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 12:39:31.159828  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:39:31.159842  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.160356  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.160575  874942 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:39:31.160676  874942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:39:31.160721  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	W0520 12:39:31.160754  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 12:39:31.160788  874942 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 12:39:31.160859  874942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:39:31.160881  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:39:31.163394  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.163529  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.163791  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.163819  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.163955  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:31.164040  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:31.164061  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:31.164140  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.164228  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:39:31.164320  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:31.164386  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:39:31.164455  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:31.164504  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:39:31.164643  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:39:31.394977  874942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:39:31.401332  874942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:39:31.401415  874942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:39:31.418045  874942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:39:31.418070  874942 start.go:494] detecting cgroup driver to use...
	I0520 12:39:31.418146  874942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:39:31.435442  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:39:31.449967  874942 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:39:31.450040  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:39:31.463884  874942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:39:31.478183  874942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:39:31.605461  874942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:39:31.753952  874942 docker.go:233] disabling docker service ...
	I0520 12:39:31.754030  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:39:31.768796  874942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:39:31.781871  874942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:39:31.923469  874942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:39:32.048131  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:39:32.061578  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:39:32.080250  874942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:39:32.080322  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.091344  874942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:39:32.091412  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.102979  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.114019  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.124736  874942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:39:32.135603  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.149479  874942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.168430  874942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:39:32.180071  874942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:39:32.190436  874942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:39:32.190503  874942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:39:32.204611  874942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:39:32.214110  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:39:32.344192  874942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:39:32.481893  874942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:39:32.481977  874942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:39:32.487357  874942 start.go:562] Will wait 60s for crictl version
	I0520 12:39:32.487426  874942 ssh_runner.go:195] Run: which crictl
	I0520 12:39:32.491658  874942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:39:32.532074  874942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:39:32.532178  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:39:32.562070  874942 ssh_runner.go:195] Run: crio --version
	I0520 12:39:32.593794  874942 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:39:32.595067  874942 out.go:177]   - env NO_PROXY=192.168.39.182
	I0520 12:39:32.596194  874942 out.go:177]   - env NO_PROXY=192.168.39.182,192.168.39.22
	I0520 12:39:32.597283  874942 main.go:141] libmachine: (ha-252263-m03) Calling .GetIP
	I0520 12:39:32.599980  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:32.600292  874942 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:39:32.600322  874942 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:39:32.600478  874942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:39:32.605498  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:39:32.621055  874942 mustload.go:65] Loading cluster: ha-252263
	I0520 12:39:32.621295  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:32.621555  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:32.621605  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:32.637339  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0520 12:39:32.637773  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:32.638218  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:32.638241  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:32.638541  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:32.638738  874942 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:39:32.640317  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:39:32.640707  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:32.640754  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:32.655112  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0520 12:39:32.655469  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:32.655876  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:32.655898  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:32.656237  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:32.656449  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:39:32.656611  874942 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.60
	I0520 12:39:32.656624  874942 certs.go:194] generating shared ca certs ...
	I0520 12:39:32.656643  874942 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:39:32.656761  874942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:39:32.656808  874942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:39:32.656817  874942 certs.go:256] generating profile certs ...
	I0520 12:39:32.656891  874942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:39:32.656915  874942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d
	I0520 12:39:32.656928  874942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.22 192.168.39.60 192.168.39.254]
	I0520 12:39:32.811740  874942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d ...
	I0520 12:39:32.811772  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d: {Name:mk2490347f6aab00b81e510d8c0a07675811ea03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:39:32.811936  874942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d ...
	I0520 12:39:32.811947  874942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d: {Name:mkffe5436ecc0b97d71ed455d88101b1f79fe6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:39:32.812012  874942 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.643bcb5d -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:39:32.812145  874942 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.643bcb5d -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
	I0520 12:39:32.812273  874942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:39:32.812289  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:39:32.812302  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:39:32.812315  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:39:32.812327  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:39:32.812340  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:39:32.812352  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:39:32.812360  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:39:32.812370  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:39:32.812417  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:39:32.812443  874942 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:39:32.812452  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:39:32.812474  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:39:32.812495  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:39:32.812514  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:39:32.812549  874942 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:39:32.812576  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:32.812589  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:39:32.812601  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:39:32.812637  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:39:32.816229  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:32.816613  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:39:32.816653  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:32.816810  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:39:32.817016  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:39:32.817151  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:39:32.817299  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:39:32.891183  874942 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0520 12:39:32.898524  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 12:39:32.910270  874942 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0520 12:39:32.914810  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0520 12:39:32.926151  874942 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 12:39:32.930611  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 12:39:32.941555  874942 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0520 12:39:32.946145  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 12:39:32.956373  874942 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0520 12:39:32.960354  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 12:39:32.970797  874942 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0520 12:39:32.974673  874942 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0520 12:39:32.984778  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:39:33.012166  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:39:33.036483  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:39:33.061487  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:39:33.089543  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 12:39:33.115462  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:39:33.139446  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:39:33.166323  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:39:33.192642  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:39:33.217038  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:39:33.241040  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:39:33.265169  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 12:39:33.281034  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0520 12:39:33.297163  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 12:39:33.313889  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 12:39:33.330334  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 12:39:33.347544  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0520 12:39:33.364181  874942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 12:39:33.380863  874942 ssh_runner.go:195] Run: openssl version
	I0520 12:39:33.386771  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:39:33.397901  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:33.402622  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:33.402687  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:39:33.408489  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 12:39:33.419751  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:39:33.430108  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:39:33.434929  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:39:33.434970  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:39:33.440729  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:39:33.452477  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:39:33.463377  874942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:39:33.467994  874942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:39:33.468049  874942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:39:33.473960  874942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:39:33.484269  874942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:39:33.488247  874942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 12:39:33.488301  874942 kubeadm.go:928] updating node {m03 192.168.39.60 8443 v1.30.1 crio true true} ...
	I0520 12:39:33.488395  874942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:39:33.488431  874942 kube-vip.go:115] generating kube-vip config ...
	I0520 12:39:33.488467  874942 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:39:33.504248  874942 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:39:33.504352  874942 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 12:39:33.504406  874942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:39:33.513663  874942 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 12:39:33.513719  874942 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 12:39:33.522595  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 12:39:33.522621  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:39:33.522635  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 12:39:33.522640  874942 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 12:39:33.522655  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:39:33.522685  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:39:33.522696  874942 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 12:39:33.522751  874942 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 12:39:33.527003  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 12:39:33.527034  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 12:39:33.554916  874942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:39:33.555021  874942 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 12:39:33.554928  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 12:39:33.555083  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 12:39:33.590624  874942 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 12:39:33.590665  874942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 12:39:34.423074  874942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 12:39:34.432920  874942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 12:39:34.449539  874942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:39:34.466654  874942 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 12:39:34.483572  874942 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:39:34.487390  874942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 12:39:34.500354  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:39:34.625707  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:39:34.645038  874942 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:39:34.645615  874942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:39:34.645683  874942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:39:34.663218  874942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0520 12:39:34.663759  874942 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:39:34.664283  874942 main.go:141] libmachine: Using API Version  1
	I0520 12:39:34.664307  874942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:39:34.664619  874942 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:39:34.664875  874942 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:39:34.665068  874942 start.go:316] joinCluster: &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:39:34.665191  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 12:39:34.665222  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:39:34.668649  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:34.669191  874942 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:39:34.669220  874942 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:39:34.669393  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:39:34.669566  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:39:34.669714  874942 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:39:34.669907  874942 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:39:34.916769  874942 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:39:34.916839  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pu2q37.5isfc5ba65e0sin1 --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m03 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443"
	I0520 12:39:58.501906  874942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pu2q37.5isfc5ba65e0sin1 --discovery-token-ca-cert-hash sha256:4efa215a61e92767de74ed297b906742018545807548258791bcd64d976858a3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-252263-m03 --control-plane --apiserver-advertise-address=192.168.39.60 --apiserver-bind-port=8443": (23.585033533s)
	I0520 12:39:58.501957  874942 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 12:39:59.121244  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-252263-m03 minikube.k8s.io/updated_at=2024_05_20T12_39_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=ha-252263 minikube.k8s.io/primary=false
	I0520 12:39:59.233711  874942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-252263-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 12:39:59.359715  874942 start.go:318] duration metric: took 24.694639977s to joinCluster
	I0520 12:39:59.359826  874942 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:39:59.360194  874942 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:39:59.361582  874942 out.go:177] * Verifying Kubernetes components...
	I0520 12:39:59.362954  874942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:39:59.660541  874942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:39:59.730100  874942 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:39:59.730552  874942 kapi.go:59] client config for ha-252263: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 12:39:59.730659  874942 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.182:8443
	I0520 12:39:59.731013  874942 node_ready.go:35] waiting up to 6m0s for node "ha-252263-m03" to be "Ready" ...
	I0520 12:39:59.731135  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:39:59.731148  874942 round_trippers.go:469] Request Headers:
	I0520 12:39:59.731163  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:39:59.731171  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:39:59.733785  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:00.231834  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:00.231857  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:00.231865  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:00.231869  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:00.235785  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:00.731900  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:00.731924  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:00.731932  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:00.731936  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:00.736011  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:01.231724  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:01.231776  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:01.231788  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:01.231797  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:01.236497  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:01.731514  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:01.731537  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:01.731546  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:01.731550  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:01.736001  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:01.736882  874942 node_ready.go:53] node "ha-252263-m03" has status "Ready":"False"
	I0520 12:40:02.232193  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:02.232222  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.232232  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.232236  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.235025  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.731753  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:02.731783  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.731794  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.731802  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.735081  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.735826  874942 node_ready.go:49] node "ha-252263-m03" has status "Ready":"True"
	I0520 12:40:02.735847  874942 node_ready.go:38] duration metric: took 3.004807659s for node "ha-252263-m03" to be "Ready" ...
	I0520 12:40:02.735857  874942 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:40:02.735920  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:02.735928  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.735936  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.735943  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.743026  874942 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 12:40:02.751704  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.751810  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-96h5w
	I0520 12:40:02.751820  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.751829  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.751836  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.754604  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.755245  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:02.755263  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.755274  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.755280  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.758437  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.759209  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.759233  874942 pod_ready.go:81] duration metric: took 7.496777ms for pod "coredns-7db6d8ff4d-96h5w" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.759246  874942 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.759320  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c2vkj
	I0520 12:40:02.759331  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.759341  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.759347  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.761860  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.762510  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:02.762524  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.762532  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.762535  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.765184  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:02.765768  874942 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.765785  874942 pod_ready.go:81] duration metric: took 6.527198ms for pod "coredns-7db6d8ff4d-c2vkj" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.765796  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.765855  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263
	I0520 12:40:02.765863  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.765872  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.765880  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.769921  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:02.770920  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:02.770939  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.770950  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.770958  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.774265  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.774894  874942 pod_ready.go:92] pod "etcd-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.774916  874942 pod_ready.go:81] duration metric: took 9.111753ms for pod "etcd-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.774928  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.775003  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m02
	I0520 12:40:02.775014  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.775023  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.775026  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.779947  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:02.780990  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:02.781008  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.781017  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.781025  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.785008  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:02.785537  874942 pod_ready.go:92] pod "etcd-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:02.785551  874942 pod_ready.go:81] duration metric: took 10.616344ms for pod "etcd-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.785560  874942 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:02.931878  874942 request.go:629] Waited for 146.224222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:02.931940  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:02.931949  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:02.931960  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:02.931970  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:02.935466  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.132477  874942 request.go:629] Waited for 196.395923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.132541  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.132546  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.132554  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.132561  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.135715  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.331966  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:03.331990  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.331999  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.332006  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.335246  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.532442  874942 request.go:629] Waited for 196.411511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.532542  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.532553  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.532561  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.532568  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.536617  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:03.786367  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:03.786393  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.786401  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.786406  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.789859  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:03.932022  874942 request.go:629] Waited for 141.329959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.932099  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:03.932106  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:03.932116  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:03.932125  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:03.935494  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.286317  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:04.286341  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.286349  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.286354  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.290198  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.332286  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:04.332311  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.332322  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.332326  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.335814  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.786334  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:04.786358  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.786366  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.786371  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.789630  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.790424  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:04.790439  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:04.790447  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:04.790452  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:04.793731  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:04.794308  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:05.286800  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:05.286824  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.286835  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.286840  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.289844  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:05.290904  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:05.290919  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.290929  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.290935  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.293763  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:05.786624  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:05.786649  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.786659  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.786665  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.790724  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:05.791844  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:05.791860  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:05.791870  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:05.791875  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:05.795129  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:06.286239  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:06.286267  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.286275  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.286282  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.290346  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:06.292015  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:06.292035  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.292045  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.292050  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.294742  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:06.785780  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:06.785807  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.785814  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.785818  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.789096  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:06.789828  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:06.789846  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:06.789854  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:06.789860  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:06.792646  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:07.285744  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:07.285773  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.285784  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.285790  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.288771  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:07.289425  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:07.289444  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.289452  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.289455  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.292222  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:07.292904  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:07.786652  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:07.786674  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.786682  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.786687  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.790245  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:07.791028  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:07.791045  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:07.791050  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:07.791053  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:07.793963  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:08.286253  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:08.286284  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.286294  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.286301  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.289986  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:08.290909  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:08.290928  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.290942  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.290950  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.294015  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:08.786612  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:08.786645  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.786659  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.786664  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.790127  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:08.790866  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:08.790885  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:08.790896  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:08.790903  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:08.794534  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.286643  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:09.286666  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.286674  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.286677  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.289876  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.290439  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:09.290453  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.290461  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.290467  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.293570  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.294097  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:09.785903  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:09.785931  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.785943  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.785951  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.789125  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:09.789798  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:09.789815  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:09.789826  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:09.789831  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:09.792347  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:10.286716  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:10.286742  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.286748  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.286752  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.291643  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:10.292601  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:10.292618  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.292630  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.292637  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.295979  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:10.786472  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:10.786494  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.786503  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.786507  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.789576  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:10.790460  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:10.790478  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:10.790486  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:10.790492  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:10.793941  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.286457  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:11.286477  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.286486  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.286490  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.289966  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.291052  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:11.291118  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.291137  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.291143  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.294395  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.295277  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:11.785884  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:11.785911  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.785925  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.785934  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.789826  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:11.790885  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:11.790899  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:11.790907  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:11.790910  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:11.794609  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:12.286594  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:12.286616  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.286625  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.286630  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.290795  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:12.291822  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:12.291851  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.291861  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.291875  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.295092  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:12.786754  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:12.786780  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.786791  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.786796  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.789985  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:12.790797  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:12.790813  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:12.790820  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:12.790824  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:12.793863  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:13.286184  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:13.286209  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.286218  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.286222  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.289266  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:13.290009  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:13.290024  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.290032  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.290036  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.292839  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:13.786046  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:13.786069  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.786078  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.786081  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.790433  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:13.791704  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:13.791725  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:13.791741  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:13.791748  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:13.794517  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:13.795187  874942 pod_ready.go:102] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 12:40:14.286086  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:14.286107  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.286115  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.286119  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.288995  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.289953  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:14.289970  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.289979  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.289984  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.292808  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.786110  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/etcd-ha-252263-m03
	I0520 12:40:14.786134  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.786141  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.786147  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.789532  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:14.790246  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:14.790262  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.790270  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.790274  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.793033  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.793542  874942 pod_ready.go:92] pod "etcd-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.793565  874942 pod_ready.go:81] duration metric: took 12.007998033s for pod "etcd-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.793588  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.793671  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263
	I0520 12:40:14.793685  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.793695  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.793700  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.795964  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.796706  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:14.796724  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.796732  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.796737  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.798788  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.799307  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.799328  874942 pod_ready.go:81] duration metric: took 5.730111ms for pod "kube-apiserver-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.799340  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.799401  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263-m02
	I0520 12:40:14.799409  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.799418  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.799425  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.801975  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.802506  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:14.802519  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.802525  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.802528  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.804535  874942 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 12:40:14.805043  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.805058  874942 pod_ready.go:81] duration metric: took 5.710651ms for pod "kube-apiserver-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.805066  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.805116  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-252263-m03
	I0520 12:40:14.805124  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.805130  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.805135  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.807349  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.808000  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:14.808016  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.808026  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.808031  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.810398  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.810900  874942 pod_ready.go:92] pod "kube-apiserver-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.810922  874942 pod_ready.go:81] duration metric: took 5.849942ms for pod "kube-apiserver-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.810933  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.810990  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263
	I0520 12:40:14.811000  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.811010  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.811018  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.813091  874942 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 12:40:14.813524  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:14.813538  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.813545  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.813549  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.815403  874942 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 12:40:14.815740  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:14.815754  874942 pod_ready.go:81] duration metric: took 4.814235ms for pod "kube-controller-manager-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.815763  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:14.987195  874942 request.go:629] Waited for 171.343784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m02
	I0520 12:40:14.987256  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m02
	I0520 12:40:14.987263  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:14.987271  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:14.987277  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:14.990306  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.186567  874942 request.go:629] Waited for 195.376606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.186643  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.186651  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.186666  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.186674  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.190350  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.190909  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:15.190933  874942 pod_ready.go:81] duration metric: took 375.159925ms for pod "kube-controller-manager-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.190951  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.387116  874942 request.go:629] Waited for 196.056387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m03
	I0520 12:40:15.387214  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-252263-m03
	I0520 12:40:15.387234  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.387244  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.387260  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.390370  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.586350  874942 request.go:629] Waited for 194.939417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:15.586427  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:15.586432  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.586440  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.586447  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.589805  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.590499  874942 pod_ready.go:92] pod "kube-controller-manager-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:15.590517  874942 pod_ready.go:81] duration metric: took 399.555096ms for pod "kube-controller-manager-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.590529  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.786923  874942 request.go:629] Waited for 196.315135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84x7f
	I0520 12:40:15.787012  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84x7f
	I0520 12:40:15.787046  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.787062  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.787074  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.790375  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.986220  874942 request.go:629] Waited for 195.226495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.986309  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:15.986318  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:15.986325  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:15.986330  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:15.989485  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:15.990321  874942 pod_ready.go:92] pod "kube-proxy-84x7f" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:15.990340  874942 pod_ready.go:81] duration metric: took 399.802434ms for pod "kube-proxy-84x7f" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:15.990350  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c8zs5" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.186454  874942 request.go:629] Waited for 196.021403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c8zs5
	I0520 12:40:16.186542  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c8zs5
	I0520 12:40:16.186561  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.186588  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.186598  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.189804  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.386784  874942 request.go:629] Waited for 196.311388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:16.386870  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:16.386878  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.386888  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.386895  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.390239  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.390889  874942 pod_ready.go:92] pod "kube-proxy-c8zs5" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:16.390911  874942 pod_ready.go:81] duration metric: took 400.553474ms for pod "kube-proxy-c8zs5" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.390923  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.587027  874942 request.go:629] Waited for 196.000061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:40:16.587091  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5zvt
	I0520 12:40:16.587095  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.587104  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.587115  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.590184  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.786264  874942 request.go:629] Waited for 195.288041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:16.786329  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:16.786336  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.786347  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.786356  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.790169  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:16.790677  874942 pod_ready.go:92] pod "kube-proxy-z5zvt" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:16.790698  874942 pod_ready.go:81] duration metric: took 399.767609ms for pod "kube-proxy-z5zvt" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.790708  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:16.986973  874942 request.go:629] Waited for 196.161345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:40:16.987053  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263
	I0520 12:40:16.987070  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:16.987081  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:16.987086  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:16.990504  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.186800  874942 request.go:629] Waited for 195.321566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:17.186903  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263
	I0520 12:40:17.186911  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.186922  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.186930  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.190016  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.190633  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:17.190658  874942 pod_ready.go:81] duration metric: took 399.940903ms for pod "kube-scheduler-ha-252263" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.190673  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.386723  874942 request.go:629] Waited for 195.940912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m02
	I0520 12:40:17.386787  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m02
	I0520 12:40:17.386792  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.386800  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.386805  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.390225  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.586762  874942 request.go:629] Waited for 195.433055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:17.586852  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m02
	I0520 12:40:17.586857  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.586865  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.586871  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.590114  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.590779  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:17.590803  874942 pod_ready.go:81] duration metric: took 400.117772ms for pod "kube-scheduler-ha-252263-m02" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.590815  874942 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.786573  874942 request.go:629] Waited for 195.642396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m03
	I0520 12:40:17.786673  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-252263-m03
	I0520 12:40:17.786683  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.786694  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.786703  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.789794  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.987072  874942 request.go:629] Waited for 196.422346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:17.987141  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes/ha-252263-m03
	I0520 12:40:17.987146  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:17.987154  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:17.987160  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:17.990724  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:17.991355  874942 pod_ready.go:92] pod "kube-scheduler-ha-252263-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 12:40:17.991379  874942 pod_ready.go:81] duration metric: took 400.554642ms for pod "kube-scheduler-ha-252263-m03" in "kube-system" namespace to be "Ready" ...
	I0520 12:40:17.991393  874942 pod_ready.go:38] duration metric: took 15.255524587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 12:40:17.991412  874942 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:40:17.991482  874942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:40:18.011407  874942 api_server.go:72] duration metric: took 18.651540784s to wait for apiserver process to appear ...
	I0520 12:40:18.011432  874942 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:40:18.011456  874942 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0520 12:40:18.019993  874942 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0520 12:40:18.020061  874942 round_trippers.go:463] GET https://192.168.39.182:8443/version
	I0520 12:40:18.020067  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.020079  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.020087  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.021263  874942 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 12:40:18.021325  874942 api_server.go:141] control plane version: v1.30.1
	I0520 12:40:18.021341  874942 api_server.go:131] duration metric: took 9.901753ms to wait for apiserver health ...
	I0520 12:40:18.021355  874942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:40:18.186656  874942 request.go:629] Waited for 165.194872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.186739  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.186757  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.186770  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.186776  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.193718  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:40:18.199912  874942 system_pods.go:59] 24 kube-system pods found
	I0520 12:40:18.199936  874942 system_pods.go:61] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:40:18.199940  874942 system_pods.go:61] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:40:18.199944  874942 system_pods.go:61] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:40:18.199947  874942 system_pods.go:61] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:40:18.199950  874942 system_pods.go:61] "etcd-ha-252263-m03" [76500ab4-ce7c-43b9-868b-f46f90fc54c4] Running
	I0520 12:40:18.199953  874942 system_pods.go:61] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:40:18.199956  874942 system_pods.go:61] "kindnet-d67g2" [a66b7178-4b9d-4958-898b-37ff6350432a] Running
	I0520 12:40:18.199958  874942 system_pods.go:61] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:40:18.199961  874942 system_pods.go:61] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:40:18.199965  874942 system_pods.go:61] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:40:18.199969  874942 system_pods.go:61] "kube-apiserver-ha-252263-m03" [7f48b761-0d1e-48f3-8281-27a491a2a4b2] Running
	I0520 12:40:18.199972  874942 system_pods.go:61] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:40:18.199978  874942 system_pods.go:61] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:40:18.199983  874942 system_pods.go:61] "kube-controller-manager-ha-252263-m03" [09306613-e277-4460-9e5a-0b52e864207e] Running
	I0520 12:40:18.199988  874942 system_pods.go:61] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:40:18.199991  874942 system_pods.go:61] "kube-proxy-c8zs5" [0a2ddd4c-b435-4bd5-9a31-16f8ea676656] Running
	I0520 12:40:18.199997  874942 system_pods.go:61] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:40:18.200000  874942 system_pods.go:61] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:40:18.200003  874942 system_pods.go:61] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:40:18.200006  874942 system_pods.go:61] "kube-scheduler-ha-252263-m03" [feb4de60-8201-433b-9ac4-bf0e28dac337] Running
	I0520 12:40:18.200010  874942 system_pods.go:61] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:40:18.200013  874942 system_pods.go:61] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:40:18.200015  874942 system_pods.go:61] "kube-vip-ha-252263-m03" [52e2d893-a58f-4e3d-83d9-208bd7f3b04f] Running
	I0520 12:40:18.200018  874942 system_pods.go:61] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:40:18.200022  874942 system_pods.go:74] duration metric: took 178.659158ms to wait for pod list to return data ...
	I0520 12:40:18.200030  874942 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:40:18.386458  874942 request.go:629] Waited for 186.350768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:40:18.386519  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/default/serviceaccounts
	I0520 12:40:18.386523  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.386531  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.386534  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.390190  874942 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 12:40:18.390324  874942 default_sa.go:45] found service account: "default"
	I0520 12:40:18.390345  874942 default_sa.go:55] duration metric: took 190.306583ms for default service account to be created ...
	I0520 12:40:18.390356  874942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:40:18.587000  874942 request.go:629] Waited for 196.552739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.587066  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/namespaces/kube-system/pods
	I0520 12:40:18.587071  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.587080  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.587083  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.593251  874942 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 12:40:18.600444  874942 system_pods.go:86] 24 kube-system pods found
	I0520 12:40:18.600472  874942 system_pods.go:89] "coredns-7db6d8ff4d-96h5w" [3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf] Running
	I0520 12:40:18.600478  874942 system_pods.go:89] "coredns-7db6d8ff4d-c2vkj" [a5fa83f0-abaa-4c78-8d08-124503934fb1] Running
	I0520 12:40:18.600483  874942 system_pods.go:89] "etcd-ha-252263" [d5d0140d-3bf7-4b3f-9a11-b275e9800f1d] Running
	I0520 12:40:18.600487  874942 system_pods.go:89] "etcd-ha-252263-m02" [1a626412-42d2-478b-9ebf-891abf9e9a5a] Running
	I0520 12:40:18.600491  874942 system_pods.go:89] "etcd-ha-252263-m03" [76500ab4-ce7c-43b9-868b-f46f90fc54c4] Running
	I0520 12:40:18.600494  874942 system_pods.go:89] "kindnet-8vkjc" [b222e7ad-6005-42bf-867f-40b94d584782] Running
	I0520 12:40:18.600499  874942 system_pods.go:89] "kindnet-d67g2" [a66b7178-4b9d-4958-898b-37ff6350432a] Running
	I0520 12:40:18.600503  874942 system_pods.go:89] "kindnet-lfz72" [dcfb2815-bac5-46fd-b65e-6fa4cbc748be] Running
	I0520 12:40:18.600507  874942 system_pods.go:89] "kube-apiserver-ha-252263" [69e7f726-e571-41dd-a16e-10f4b495d230] Running
	I0520 12:40:18.600511  874942 system_pods.go:89] "kube-apiserver-ha-252263-m02" [6cecadf0-4518-4744-aa2b-81a27c1cfb0d] Running
	I0520 12:40:18.600518  874942 system_pods.go:89] "kube-apiserver-ha-252263-m03" [7f48b761-0d1e-48f3-8281-27a491a2a4b2] Running
	I0520 12:40:18.600522  874942 system_pods.go:89] "kube-controller-manager-ha-252263" [51976a74-4436-45cc-9192-6d0af34f30b0] Running
	I0520 12:40:18.600530  874942 system_pods.go:89] "kube-controller-manager-ha-252263-m02" [72556438-654e-4070-ad00-d3e737db68dd] Running
	I0520 12:40:18.600533  874942 system_pods.go:89] "kube-controller-manager-ha-252263-m03" [09306613-e277-4460-9e5a-0b52e864207e] Running
	I0520 12:40:18.600537  874942 system_pods.go:89] "kube-proxy-84x7f" [af9df182-185d-479e-abf7-7bcb3709d039] Running
	I0520 12:40:18.600541  874942 system_pods.go:89] "kube-proxy-c8zs5" [0a2ddd4c-b435-4bd5-9a31-16f8ea676656] Running
	I0520 12:40:18.600546  874942 system_pods.go:89] "kube-proxy-z5zvt" [fd9f5f1f-60ac-4567-8d5c-b2de0404623f] Running
	I0520 12:40:18.600550  874942 system_pods.go:89] "kube-scheduler-ha-252263" [a6b8dabc-a8a1-46b3-ae41-ecb026648fe3] Running
	I0520 12:40:18.600554  874942 system_pods.go:89] "kube-scheduler-ha-252263-m02" [bafebb09-b0c8-481f-8808-d4396c2b28cb] Running
	I0520 12:40:18.600560  874942 system_pods.go:89] "kube-scheduler-ha-252263-m03" [feb4de60-8201-433b-9ac4-bf0e28dac337] Running
	I0520 12:40:18.600564  874942 system_pods.go:89] "kube-vip-ha-252263" [6e5827b4-5a1c-4523-9282-8c901ab68b5a] Running
	I0520 12:40:18.600570  874942 system_pods.go:89] "kube-vip-ha-252263-m02" [d33ac9fa-d81e-4676-a735-76f6709c3695] Running
	I0520 12:40:18.600573  874942 system_pods.go:89] "kube-vip-ha-252263-m03" [52e2d893-a58f-4e3d-83d9-208bd7f3b04f] Running
	I0520 12:40:18.600577  874942 system_pods.go:89] "storage-provisioner" [5db18dbf-710f-4c10-84bb-c5120c865740] Running
	I0520 12:40:18.600582  874942 system_pods.go:126] duration metric: took 210.217723ms to wait for k8s-apps to be running ...
	I0520 12:40:18.600592  874942 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:40:18.600645  874942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:40:18.615703  874942 system_svc.go:56] duration metric: took 15.100667ms WaitForService to wait for kubelet
	I0520 12:40:18.615728  874942 kubeadm.go:576] duration metric: took 19.255867278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:40:18.615747  874942 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:40:18.787128  874942 request.go:629] Waited for 171.293819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.182:8443/api/v1/nodes
	I0520 12:40:18.787201  874942 round_trippers.go:463] GET https://192.168.39.182:8443/api/v1/nodes
	I0520 12:40:18.787207  874942 round_trippers.go:469] Request Headers:
	I0520 12:40:18.787221  874942 round_trippers.go:473]     Accept: application/json, */*
	I0520 12:40:18.787231  874942 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 12:40:18.791588  874942 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 12:40:18.792796  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:40:18.792819  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:40:18.792830  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:40:18.792833  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:40:18.792836  874942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:40:18.792839  874942 node_conditions.go:123] node cpu capacity is 2
	I0520 12:40:18.792843  874942 node_conditions.go:105] duration metric: took 177.092352ms to run NodePressure ...
	I0520 12:40:18.792855  874942 start.go:240] waiting for startup goroutines ...
	I0520 12:40:18.792875  874942 start.go:254] writing updated cluster config ...
	I0520 12:40:18.793237  874942 ssh_runner.go:195] Run: rm -f paused
	I0520 12:40:18.844454  874942 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 12:40:18.846614  874942 out.go:177] * Done! kubectl is now configured to use "ha-252263" cluster and "default" namespace by default
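	
	For reference, the readiness checks logged above (default service account, kube-system pods, kubelet service, node conditions) are ordinary Kubernetes API calls. A minimal client-go sketch of the "waiting for k8s-apps to be running" step, assuming the kubeconfig at the default ~/.kube/config location (paths and error handling are illustrative, not taken from the test code):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the kubeconfig from the default location (assumption; minikube
		// integration runs point KUBECONFIG elsewhere).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// Mirror the system_pods check: list kube-system pods and report any
		// that are not in phase Running.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
			}
		}
		fmt.Printf("%d kube-system pods checked\n", len(pods.Items))
	}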
	
	
	==> CRI-O <==
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.006387750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209087006367419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d115449d-38a0-4d42-8967-7ef08501e270 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.007025764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a390a08-c84a-4e9f-88d3-9ee7ab37a4f3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.007100447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a390a08-c84a-4e9f-88d3-9ee7ab37a4f3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.007328223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a390a08-c84a-4e9f-88d3-9ee7ab37a4f3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.043047410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c16490b9-e320-4d02-beb0-cbff208d9d79 name=/runtime.v1.RuntimeService/Version
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.043118558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c16490b9-e320-4d02-beb0-cbff208d9d79 name=/runtime.v1.RuntimeService/Version
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.044008133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce5c4845-916d-4de6-afe0-291b081f489c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.044426054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209087044405242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce5c4845-916d-4de6-afe0-291b081f489c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.044947162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77c45933-769e-40b5-9594-02de01d2e4bc name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.045004036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77c45933-769e-40b5-9594-02de01d2e4bc name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.047277387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77c45933-769e-40b5-9594-02de01d2e4bc name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.094092002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1614c220-c238-452e-9e17-cb32952c5a18 name=/runtime.v1.RuntimeService/Version
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.094155986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1614c220-c238-452e-9e17-cb32952c5a18 name=/runtime.v1.RuntimeService/Version
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.095266596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b381d1ea-9564-4912-bba9-1fa37764a5ea name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.095777037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209087095756063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b381d1ea-9564-4912-bba9-1fa37764a5ea name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.096398287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ade594c9-8f71-4ba5-a7c3-f24bb48f5879 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.096445558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ade594c9-8f71-4ba5-a7c3-f24bb48f5879 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.096846531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ade594c9-8f71-4ba5-a7c3-f24bb48f5879 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.131380502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f39c98d-1a87-482e-b4dd-da05be32e98d name=/runtime.v1.RuntimeService/Version
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.131446682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f39c98d-1a87-482e-b4dd-da05be32e98d name=/runtime.v1.RuntimeService/Version
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.132330170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc7e2428-22ae-45f7-9153-8b2550ef6875 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.133000324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209087132882557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc7e2428-22ae-45f7-9153-8b2550ef6875 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.133531269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=861e844a-0d1e-454f-932e-8fd01d6d73e6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.133578990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=861e844a-0d1e-454f-932e-8fd01d6d73e6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:44:47 ha-252263 crio[680]: time="2024-05-20 12:44:47.133793669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716208821244579996,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674333320289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716208674327059972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353,PodSandboxId:509e3f4d08fedc3173c18c0b94ea58929a76174d6dc95a04aefbeb74e9507e75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1716208674232176169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0,PodSandboxId:f86d5e1365cb832e0d1cc4b6bfa804f62095e06c4733bbba19ec38fb00ee97c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17162086
72416030703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716208672039062123,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9,PodSandboxId:f80807f22bebc862472eb7c843cb9f208163edfe0c2103750f8f204deaf5e4f4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716208653565978132,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f00725a825c4f7424b73b648375ccaa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf,PodSandboxId:73772985d8fcca40bcbcd3e2f6305e797a90ce024f00fea03e18907ca318c200,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716208651879196138,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716208651871576671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257,PodSandboxId:530a8699d490ca93f93328170e233c12e11f2b6a5f9898775c9181c5d229518a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716208651847301509,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernete
s.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716208651760986831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=861e844a-0d1e-454f-932e-8fd01d6d73e6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fb77a13cb639       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   e3f7317af104f       busybox-fc5497c4f-vdgxd
	0aaaa2c2d0a2a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   8217c5dc10b50       coredns-7db6d8ff4d-c2vkj
	81df7a9501142       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   43b0b303d8ecf       coredns-7db6d8ff4d-96h5w
	f4931bfff375c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   509e3f4d08fed       storage-provisioner
	0fab498e261e0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   f86d5e1365cb8       kindnet-8vkjc
	8481a0a858b8f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   85f3c6afc77a5       kube-proxy-z5zvt
	8e7cb9bc29277       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   f80807f22bebc       kube-vip-ha-252263
	78352b69293ae       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   73772985d8fcc       kube-apiserver-ha-252263
	8516a1fdea0a5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   e9f3670ad0515       kube-scheduler-ha-252263
	38216273b9bc6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   530a8699d490c       kube-controller-manager-ha-252263
	57b99e90b3f2c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   9dcb3183f7b71       etcd-ha-252263
	
	
	==> coredns [0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a] <==
	[INFO] 10.244.1.2:39515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230927s
	[INFO] 10.244.1.2:38792 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000064874s
	[INFO] 10.244.1.2:58037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001770245s
	[INFO] 10.244.0.4:48741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012283219s
	[INFO] 10.244.0.4:49128 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009738s
	[INFO] 10.244.2.2:33816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646431s
	[INFO] 10.244.2.2:35739 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000262525s
	[INFO] 10.244.2.2:38598 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158046s
	[INFO] 10.244.2.2:58591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129009s
	[INFO] 10.244.2.2:42154 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077099s
	[INFO] 10.244.1.2:55966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236408s
	[INFO] 10.244.1.2:38116 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165417s
	[INFO] 10.244.1.2:42765 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013421s
	[INFO] 10.244.0.4:43917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087757s
	[INFO] 10.244.2.2:39196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131607s
	[INFO] 10.244.2.2:53256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139178s
	[INFO] 10.244.2.2:51674 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089462s
	[INFO] 10.244.2.2:49072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088789s
	[INFO] 10.244.1.2:56181 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013731s
	[INFO] 10.244.1.2:41238 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121064s
	[INFO] 10.244.0.4:51538 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100171s
	[INFO] 10.244.2.2:59762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112653s
	[INFO] 10.244.2.2:48400 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080614s
	[INFO] 10.244.1.2:54360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166063s
	[INFO] 10.244.1.2:51350 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071222s
	
	
	==> coredns [81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7] <==
	[INFO] 10.244.0.4:38461 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003816118s
	[INFO] 10.244.0.4:34424 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139726s
	[INFO] 10.244.0.4:60068 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136414s
	[INFO] 10.244.0.4:60267 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175538s
	[INFO] 10.244.0.4:34444 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098358s
	[INFO] 10.244.2.2:57093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167419s
	[INFO] 10.244.2.2:33999 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001382726s
	[INFO] 10.244.2.2:34539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090296s
	[INFO] 10.244.1.2:60979 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00180355s
	[INFO] 10.244.1.2:60301 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084438s
	[INFO] 10.244.1.2:44989 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001381049s
	[INFO] 10.244.1.2:51684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008851s
	[INFO] 10.244.1.2:37865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122394s
	[INFO] 10.244.0.4:41864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103464s
	[INFO] 10.244.0.4:48776 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078784s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060251s
	[INFO] 10.244.1.2:44802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115237s
	[INFO] 10.244.1.2:33948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012433s
	[INFO] 10.244.0.4:54781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008753s
	[INFO] 10.244.0.4:54168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243725s
	[INFO] 10.244.0.4:60539 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140289s
	[INFO] 10.244.2.2:37865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093682s
	[INFO] 10.244.2.2:38339 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116317s
	[INFO] 10.244.1.2:44551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117883s
	[INFO] 10.244.1.2:42004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008187s
	
	
	==> describe nodes <==
	Name:               ha-252263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_37_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:44:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:40:42 +0000   Mon, 20 May 2024 12:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-252263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 35935ea8555a4df9a418abd1fd7734ca
	  System UUID:                35935ea8-555a-4df9-a418-abd1fd7734ca
	  Boot ID:                    96326bcd-6af4-4e73-8e52-8d2d55c0ef49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vdgxd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 coredns-7db6d8ff4d-96h5w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m56s
	  kube-system                 coredns-7db6d8ff4d-c2vkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m56s
	  kube-system                 etcd-ha-252263                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m11s
	  kube-system                 kindnet-8vkjc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m56s
	  kube-system                 kube-apiserver-ha-252263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-controller-manager-ha-252263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-z5zvt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-scheduler-ha-252263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-vip-ha-252263                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m54s  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m16s  kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m9s   kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m9s   kubelet          Node ha-252263 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m9s   kubelet          Node ha-252263 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m57s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal  NodeReady                6m54s  kubelet          Node ha-252263 status is now: NodeReady
	  Normal  RegisteredNode           5m46s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal  RegisteredNode           4m34s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	
	
	Name:               ha-252263-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_38_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:38:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:41:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 12:40:45 +0000   Mon, 20 May 2024 12:41:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-252263-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 39c8edfb8be441aab0eaa91516d89ad1
	  System UUID:                39c8edfb-8be4-41aa-b0ea-a91516d89ad1
	  Boot ID:                    47fc0a20-7d26-4ae4-84a6-254956052d62
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqdrj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-252263-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-lfz72                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-252263-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-252263-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-84x7f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-252263-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-vip-ha-252263-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-252263-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m2s                 node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           5m46s                node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           4m34s                node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  NodeNotReady             2m49s                node-controller  Node ha-252263-m02 status is now: NodeNotReady
	
	
	Name:               ha-252263-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_39_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:39:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:44:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:40:25 +0000   Mon, 20 May 2024 12:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-252263-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3787355a13534f32abf4729d5f862897
	  System UUID:                3787355a-1353-4f32-abf4-729d5f862897
	  Boot ID:                    68a704e8-f575-4b7b-98a9-d727d451be92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xq6j6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-252263-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m51s
	  kube-system                 kindnet-d67g2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m53s
	  kube-system                 kube-apiserver-ha-252263-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-controller-manager-ha-252263-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-c8zs5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-ha-252263-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-vip-ha-252263-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m53s (x8 over 4m53s)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x8 over 4m53s)  kubelet          Node ha-252263-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x7 over 4m53s)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	
	
	Name:               ha-252263-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_40_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:40:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:44:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:40:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:40:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:40:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:41:26 +0000   Mon, 20 May 2024 12:41:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-252263-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e01b8d01b7b3442aafbd1460443cc06b
	  System UUID:                e01b8d01-b7b3-442a-afbd-1460443cc06b
	  Boot ID:                    14648e88-4164-483b-8f3c-95db62d2c79a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5st4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-gww58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x2 over 3m52s)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x2 over 3m52s)  kubelet          Node ha-252263-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x2 over 3m52s)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal  NodeReady                3m43s                  kubelet          Node ha-252263-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May20 12:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051150] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040296] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[May20 12:37] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.429532] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.630936] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.720517] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056941] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063479] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.182637] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137786] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261133] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.100200] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.178110] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.059165] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.929456] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.070241] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.174384] kauditd_printk_skb: 21 callbacks suppressed
	[May20 12:38] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b] <==
	{"level":"warn","ts":"2024-05-20T12:44:47.379636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.382021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.391033Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.397253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.407883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.418617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.424828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.430605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.434064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.446285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.452175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.453062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.457632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.461948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.465474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.471515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.476486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.481542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.48178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.484746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.487579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.492496Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.497478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.497665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T12:44:47.504424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"50ad4904f737d679","from":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:44:47 up 7 min,  0 users,  load average: 0.82, 0.44, 0.19
	Linux ha-252263 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0fab498e261e0004d632159f1746a2f9acd5404456b75e147447f6c0bbd77ab0] <==
	I0520 12:44:13.861521       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:44:23.876296       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:44:23.876338       1 main.go:227] handling current node
	I0520 12:44:23.876352       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:44:23.876360       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:44:23.876503       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:44:23.876536       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:44:23.876611       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:44:23.876618       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:44:33.887560       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:44:33.887677       1 main.go:227] handling current node
	I0520 12:44:33.887707       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:44:33.887804       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:44:33.888028       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:44:33.888080       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:44:33.888188       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:44:33.888240       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:44:43.902403       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:44:43.902445       1 main.go:227] handling current node
	I0520 12:44:43.902455       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:44:43.902461       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:44:43.902552       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:44:43.902574       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:44:43.902617       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:44:43.902639       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf] <==
	I0520 12:37:38.031320       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:37:38.064537       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:37:38.076969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:37:51.151473       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0520 12:37:51.372610       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0520 12:40:22.331370       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34392: use of closed network connection
	E0520 12:40:22.526129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34410: use of closed network connection
	E0520 12:40:22.730180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34432: use of closed network connection
	E0520 12:40:22.941054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34448: use of closed network connection
	E0520 12:40:23.117192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34466: use of closed network connection
	E0520 12:40:23.311700       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34480: use of closed network connection
	E0520 12:40:23.487778       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34496: use of closed network connection
	E0520 12:40:23.672103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34522: use of closed network connection
	E0520 12:40:23.843373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55618: use of closed network connection
	E0520 12:40:24.128795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55652: use of closed network connection
	E0520 12:40:24.307485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55666: use of closed network connection
	E0520 12:40:24.510106       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55686: use of closed network connection
	E0520 12:40:24.691589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55704: use of closed network connection
	E0520 12:40:24.875368       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55718: use of closed network connection
	I0520 12:40:58.254401       1 trace.go:236] Trace[229593480]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c35a0634-6027-4012-9c09-76f1c2392ff2,client:192.168.39.41,api-group:,api-version:v1,name:kindnet-mvk7f,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-mvk7f/status,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PATCH (20-May-2024 12:40:57.743) (total time: 510ms):
	Trace[229593480]: ["GuaranteedUpdate etcd3" audit-id:c35a0634-6027-4012-9c09-76f1c2392ff2,key:/pods/kube-system/kindnet-mvk7f,type:*core.Pod,resource:pods 510ms (12:40:57.743)
	Trace[229593480]:  ---"Txn call completed" 501ms (12:40:58.253)]
	Trace[229593480]: ---"Object stored in database" 502ms (12:40:58.254)
	Trace[229593480]: [510.739074ms] [510.739074ms] END
	W0520 12:41:36.827204       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.60]
	
	
	==> kube-controller-manager [38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257] <==
	I0520 12:38:42.834013       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-252263-m02" podCIDRs=["10.244.1.0/24"]
	I0520 12:38:45.472654       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-252263-m02"
	I0520 12:39:54.524099       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-252263-m03\" does not exist"
	I0520 12:39:54.541790       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-252263-m03" podCIDRs=["10.244.2.0/24"]
	I0520 12:39:55.500611       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-252263-m03"
	I0520 12:40:19.785836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.376263ms"
	I0520 12:40:19.809009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.105371ms"
	I0520 12:40:19.935707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.925972ms"
	I0520 12:40:20.069851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.273115ms"
	I0520 12:40:20.091563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.614626ms"
	I0520 12:40:20.091659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.374µs"
	I0520 12:40:21.600932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.072792ms"
	I0520 12:40:21.601384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="195.493µs"
	I0520 12:40:21.654564       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.506086ms"
	I0520 12:40:21.654663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.425µs"
	I0520 12:40:21.830316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.870985ms"
	I0520 12:40:21.830724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.665µs"
	E0520 12:40:55.382232       1 certificate_controller.go:146] Sync csr-n29hv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-n29hv": the object has been modified; please apply your changes to the latest version and try again
	I0520 12:40:55.653169       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-252263-m04\" does not exist"
	I0520 12:40:55.697169       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-252263-m04" podCIDRs=["10.244.3.0/24"]
	I0520 12:41:00.542594       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-252263-m04"
	I0520 12:41:04.319859       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-252263-m04"
	I0520 12:41:58.868588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-252263-m04"
	I0520 12:41:59.118680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.017617ms"
	I0520 12:41:59.118794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.429µs"
	
	
	==> kube-proxy [8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e] <==
	I0520 12:37:52.284889       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:37:52.315219       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.182"]
	I0520 12:37:52.419934       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:37:52.419972       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:37:52.419990       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:37:52.428614       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:37:52.428936       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:37:52.428984       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:37:52.430685       1 config.go:192] "Starting service config controller"
	I0520 12:37:52.430728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:37:52.430756       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:37:52.430776       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:37:52.431144       1 config.go:319] "Starting node config controller"
	I0520 12:37:52.431170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:37:52.531313       1 shared_informer.go:320] Caches are synced for node config
	I0520 12:37:52.531363       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:37:52.531436       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290] <==
	W0520 12:37:36.327547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:37:36.327607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:37:36.412567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:37:36.412614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 12:37:38.071521       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 12:39:54.603992       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d67g2\": pod kindnet-d67g2 is already assigned to node \"ha-252263-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-d67g2" node="ha-252263-m03"
	E0520 12:39:54.604172       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a66b7178-4b9d-4958-898b-37ff6350432a(kube-system/kindnet-d67g2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d67g2"
	E0520 12:39:54.604251       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d67g2\": pod kindnet-d67g2 is already assigned to node \"ha-252263-m03\"" pod="kube-system/kindnet-d67g2"
	I0520 12:39:54.604321       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d67g2" node="ha-252263-m03"
	E0520 12:39:54.603992       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-c8zs5\": pod kube-proxy-c8zs5 is already assigned to node \"ha-252263-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-c8zs5" node="ha-252263-m03"
	E0520 12:39:54.607016       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0a2ddd4c-b435-4bd5-9a31-16f8ea676656(kube-system/kube-proxy-c8zs5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-c8zs5"
	E0520 12:39:54.607037       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-c8zs5\": pod kube-proxy-c8zs5 is already assigned to node \"ha-252263-m03\"" pod="kube-system/kube-proxy-c8zs5"
	I0520 12:39:54.607168       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-c8zs5" node="ha-252263-m03"
	E0520 12:40:19.800258       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vdgxd\": pod busybox-fc5497c4f-vdgxd is already assigned to node \"ha-252263\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vdgxd" node="ha-252263"
	E0520 12:40:19.800346       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 57097c7d-bdee-48f4-8736-264f6cfaee92(default/busybox-fc5497c4f-vdgxd) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-vdgxd"
	E0520 12:40:19.800369       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vdgxd\": pod busybox-fc5497c4f-vdgxd is already assigned to node \"ha-252263\"" pod="default/busybox-fc5497c4f-vdgxd"
	I0520 12:40:19.800390       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-vdgxd" node="ha-252263"
	E0520 12:40:55.749329       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ptnbj\": pod kube-proxy-ptnbj is already assigned to node \"ha-252263-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ptnbj" node="ha-252263-m04"
	E0520 12:40:55.749418       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c6ae22ff-6dcd-43cb-9342-f5348f67d3a3(kube-system/kube-proxy-ptnbj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ptnbj"
	E0520 12:40:55.749435       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ptnbj\": pod kube-proxy-ptnbj is already assigned to node \"ha-252263-m04\"" pod="kube-system/kube-proxy-ptnbj"
	I0520 12:40:55.749459       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ptnbj" node="ha-252263-m04"
	E0520 12:40:55.756695       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l25xs\": pod kindnet-l25xs is already assigned to node \"ha-252263-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l25xs" node="ha-252263-m04"
	E0520 12:40:55.759149       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0239aeff-36c5-438b-ada6-a3f56a4f5efa(kube-system/kindnet-l25xs) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-l25xs"
	E0520 12:40:55.759239       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l25xs\": pod kindnet-l25xs is already assigned to node \"ha-252263-m04\"" pod="kube-system/kindnet-l25xs"
	I0520 12:40:55.759288       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-l25xs" node="ha-252263-m04"
	
	
	==> kubelet <==
	May 20 12:40:37 ha-252263 kubelet[1370]: E0520 12:40:37.947632    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:40:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:40:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:40:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:40:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:41:37 ha-252263 kubelet[1370]: E0520 12:41:37.946150    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:41:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:41:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:41:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:41:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:42:37 ha-252263 kubelet[1370]: E0520 12:42:37.946562    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:42:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:42:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:42:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:42:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:43:37 ha-252263 kubelet[1370]: E0520 12:43:37.951104    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:43:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:43:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:43:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:43:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:44:37 ha-252263 kubelet[1370]: E0520 12:44:37.944446    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:44:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:44:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:44:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:44:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-252263 -n ha-252263
helpers_test.go:261: (dbg) Run:  kubectl --context ha-252263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (381.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-252263 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-252263 -v=7 --alsologtostderr
E0520 12:46:10.516098  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:46:38.201949  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-252263 -v=7 --alsologtostderr: exit status 82 (2m1.902750837s)

                                                
                                                
-- stdout --
	* Stopping node "ha-252263-m04"  ...
	* Stopping node "ha-252263-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:44:48.987597  880713 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:44:48.987852  880713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:48.987860  880713 out.go:304] Setting ErrFile to fd 2...
	I0520 12:44:48.987864  880713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:44:48.988049  880713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:44:48.988264  880713 out.go:298] Setting JSON to false
	I0520 12:44:48.988346  880713 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:48.988672  880713 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:48.988765  880713 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:44:48.988939  880713 mustload.go:65] Loading cluster: ha-252263
	I0520 12:44:48.989077  880713 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:44:48.989119  880713 stop.go:39] StopHost: ha-252263-m04
	I0520 12:44:48.989490  880713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:48.989549  880713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:49.005106  880713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0520 12:44:49.005553  880713 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:49.006112  880713 main.go:141] libmachine: Using API Version  1
	I0520 12:44:49.006145  880713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:49.006557  880713 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:49.009896  880713 out.go:177] * Stopping node "ha-252263-m04"  ...
	I0520 12:44:49.011282  880713 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 12:44:49.011309  880713 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:44:49.011550  880713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 12:44:49.011582  880713 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:44:49.014288  880713 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:49.014648  880713 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:40:40 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:44:49.014671  880713 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:44:49.014826  880713 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:44:49.014969  880713 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:44:49.015071  880713 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:44:49.015222  880713 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:44:49.097229  880713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 12:44:49.150808  880713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 12:44:49.204998  880713 main.go:141] libmachine: Stopping "ha-252263-m04"...
	I0520 12:44:49.205042  880713 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:49.206639  880713 main.go:141] libmachine: (ha-252263-m04) Calling .Stop
	I0520 12:44:49.210204  880713 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 0/120
	I0520 12:44:50.426323  880713 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:44:50.427667  880713 main.go:141] libmachine: Machine "ha-252263-m04" was stopped.
	I0520 12:44:50.427686  880713 stop.go:75] duration metric: took 1.416406564s to stop
	I0520 12:44:50.427706  880713 stop.go:39] StopHost: ha-252263-m03
	I0520 12:44:50.427988  880713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:44:50.428028  880713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:44:50.444277  880713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I0520 12:44:50.444741  880713 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:44:50.445306  880713 main.go:141] libmachine: Using API Version  1
	I0520 12:44:50.445334  880713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:44:50.445688  880713 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:44:50.448867  880713 out.go:177] * Stopping node "ha-252263-m03"  ...
	I0520 12:44:50.450107  880713 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 12:44:50.450132  880713 main.go:141] libmachine: (ha-252263-m03) Calling .DriverName
	I0520 12:44:50.450383  880713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 12:44:50.450418  880713 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHHostname
	I0520 12:44:50.453507  880713 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:50.454049  880713 main.go:141] libmachine: (ha-252263-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:d8:f8", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:39:18 +0000 UTC Type:0 Mac:52:54:00:98:d8:f8 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-252263-m03 Clientid:01:52:54:00:98:d8:f8}
	I0520 12:44:50.454086  880713 main.go:141] libmachine: (ha-252263-m03) DBG | domain ha-252263-m03 has defined IP address 192.168.39.60 and MAC address 52:54:00:98:d8:f8 in network mk-ha-252263
	I0520 12:44:50.454231  880713 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHPort
	I0520 12:44:50.454419  880713 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHKeyPath
	I0520 12:44:50.454597  880713 main.go:141] libmachine: (ha-252263-m03) Calling .GetSSHUsername
	I0520 12:44:50.454747  880713 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m03/id_rsa Username:docker}
	I0520 12:44:50.538110  880713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 12:44:50.591002  880713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 12:44:50.644892  880713 main.go:141] libmachine: Stopping "ha-252263-m03"...
	I0520 12:44:50.644916  880713 main.go:141] libmachine: (ha-252263-m03) Calling .GetState
	I0520 12:44:50.646457  880713 main.go:141] libmachine: (ha-252263-m03) Calling .Stop
	I0520 12:44:50.649893  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 0/120
	I0520 12:44:51.651254  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 1/120
	I0520 12:44:52.653369  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 2/120
	I0520 12:44:53.654726  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 3/120
	I0520 12:44:54.656080  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 4/120
	I0520 12:44:55.657821  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 5/120
	I0520 12:44:56.659422  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 6/120
	I0520 12:44:57.661507  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 7/120
	I0520 12:44:58.663700  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 8/120
	I0520 12:44:59.665188  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 9/120
	I0520 12:45:00.667434  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 10/120
	I0520 12:45:01.669286  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 11/120
	I0520 12:45:02.670875  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 12/120
	I0520 12:45:03.673429  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 13/120
	I0520 12:45:04.674803  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 14/120
	I0520 12:45:05.676605  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 15/120
	I0520 12:45:06.678080  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 16/120
	I0520 12:45:07.680126  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 17/120
	I0520 12:45:08.681550  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 18/120
	I0520 12:45:09.682943  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 19/120
	I0520 12:45:10.685391  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 20/120
	I0520 12:45:11.686736  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 21/120
	I0520 12:45:12.688397  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 22/120
	I0520 12:45:13.689790  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 23/120
	I0520 12:45:14.691277  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 24/120
	I0520 12:45:15.693608  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 25/120
	I0520 12:45:16.695153  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 26/120
	I0520 12:45:17.696602  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 27/120
	I0520 12:45:18.698113  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 28/120
	I0520 12:45:19.699542  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 29/120
	I0520 12:45:20.701425  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 30/120
	I0520 12:45:21.703132  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 31/120
	I0520 12:45:22.705628  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 32/120
	I0520 12:45:23.706884  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 33/120
	I0520 12:45:24.708392  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 34/120
	I0520 12:45:25.710280  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 35/120
	I0520 12:45:26.711688  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 36/120
	I0520 12:45:27.713013  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 37/120
	I0520 12:45:28.714309  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 38/120
	I0520 12:45:29.715633  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 39/120
	I0520 12:45:30.717168  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 40/120
	I0520 12:45:31.719108  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 41/120
	I0520 12:45:32.721094  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 42/120
	I0520 12:45:33.722481  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 43/120
	I0520 12:45:34.723829  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 44/120
	I0520 12:45:35.725999  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 45/120
	I0520 12:45:36.727537  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 46/120
	I0520 12:45:37.729222  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 47/120
	I0520 12:45:38.730550  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 48/120
	I0520 12:45:39.731842  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 49/120
	I0520 12:45:40.733810  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 50/120
	I0520 12:45:41.734954  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 51/120
	I0520 12:45:42.736464  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 52/120
	I0520 12:45:43.737928  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 53/120
	I0520 12:45:44.739219  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 54/120
	I0520 12:45:45.741131  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 55/120
	I0520 12:45:46.742417  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 56/120
	I0520 12:45:47.743832  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 57/120
	I0520 12:45:48.745022  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 58/120
	I0520 12:45:49.746338  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 59/120
	I0520 12:45:50.748271  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 60/120
	I0520 12:45:51.749589  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 61/120
	I0520 12:45:52.750949  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 62/120
	I0520 12:45:53.752244  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 63/120
	I0520 12:45:54.753518  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 64/120
	I0520 12:45:55.755378  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 65/120
	I0520 12:45:56.757196  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 66/120
	I0520 12:45:57.758418  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 67/120
	I0520 12:45:58.759721  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 68/120
	I0520 12:45:59.761127  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 69/120
	I0520 12:46:00.762897  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 70/120
	I0520 12:46:01.764391  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 71/120
	I0520 12:46:02.765787  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 72/120
	I0520 12:46:03.767135  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 73/120
	I0520 12:46:04.768431  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 74/120
	I0520 12:46:05.770614  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 75/120
	I0520 12:46:06.771982  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 76/120
	I0520 12:46:07.773561  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 77/120
	I0520 12:46:08.775005  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 78/120
	I0520 12:46:09.776389  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 79/120
	I0520 12:46:10.777975  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 80/120
	I0520 12:46:11.779230  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 81/120
	I0520 12:46:12.780499  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 82/120
	I0520 12:46:13.782007  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 83/120
	I0520 12:46:14.783402  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 84/120
	I0520 12:46:15.785203  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 85/120
	I0520 12:46:16.786520  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 86/120
	I0520 12:46:17.787737  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 87/120
	I0520 12:46:18.789117  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 88/120
	I0520 12:46:19.790521  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 89/120
	I0520 12:46:20.792375  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 90/120
	I0520 12:46:21.793783  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 91/120
	I0520 12:46:22.795876  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 92/120
	I0520 12:46:23.797195  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 93/120
	I0520 12:46:24.798413  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 94/120
	I0520 12:46:25.800067  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 95/120
	I0520 12:46:26.801410  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 96/120
	I0520 12:46:27.802624  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 97/120
	I0520 12:46:28.803971  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 98/120
	I0520 12:46:29.805236  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 99/120
	I0520 12:46:30.806799  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 100/120
	I0520 12:46:31.808069  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 101/120
	I0520 12:46:32.809373  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 102/120
	I0520 12:46:33.810809  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 103/120
	I0520 12:46:34.812117  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 104/120
	I0520 12:46:35.813563  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 105/120
	I0520 12:46:36.814930  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 106/120
	I0520 12:46:37.816186  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 107/120
	I0520 12:46:38.817506  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 108/120
	I0520 12:46:39.818930  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 109/120
	I0520 12:46:40.820433  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 110/120
	I0520 12:46:41.822647  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 111/120
	I0520 12:46:42.824187  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 112/120
	I0520 12:46:43.825551  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 113/120
	I0520 12:46:44.827081  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 114/120
	I0520 12:46:45.829813  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 115/120
	I0520 12:46:46.831220  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 116/120
	I0520 12:46:47.832677  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 117/120
	I0520 12:46:48.834007  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 118/120
	I0520 12:46:49.835425  880713 main.go:141] libmachine: (ha-252263-m03) Waiting for machine to stop 119/120
	I0520 12:46:50.836277  880713 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 12:46:50.836349  880713 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 12:46:50.838476  880713 out.go:177] 
	W0520 12:46:50.840047  880713 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 12:46:50.840067  880713 out.go:239] * 
	* 
	W0520 12:46:50.843978  880713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 12:46:50.845684  880713 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-252263 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-252263 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-252263 --wait=true -v=7 --alsologtostderr: (4m17.515604272s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-252263
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-252263 -n ha-252263
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-252263 logs -n 25: (1.725647174s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m04 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp testdata/cp-test.txt                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263:/home/docker/cp-test_ha-252263-m04_ha-252263.txt                      |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263 sudo cat                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263.txt                                |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03:/home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m03 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-252263 node stop m02 -v=7                                                    | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-252263 node start m02 -v=7                                                   | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-252263 -v=7                                                          | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-252263 -v=7                                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-252263 --wait=true -v=7                                                   | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:46 UTC | 20 May 24 12:51 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-252263                                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:51 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:46:50
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:46:50.893960  881185 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:46:50.894231  881185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:46:50.894241  881185 out.go:304] Setting ErrFile to fd 2...
	I0520 12:46:50.894245  881185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:46:50.894436  881185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:46:50.894994  881185 out.go:298] Setting JSON to false
	I0520 12:46:50.895949  881185 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8959,"bootTime":1716200252,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:46:50.896013  881185 start.go:139] virtualization: kvm guest
	I0520 12:46:50.898520  881185 out.go:177] * [ha-252263] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:46:50.900363  881185 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 12:46:50.900394  881185 notify.go:220] Checking for updates...
	I0520 12:46:50.902005  881185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:46:50.903776  881185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:46:50.905142  881185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:46:50.906396  881185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:46:50.907765  881185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:46:50.909421  881185 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:46:50.909516  881185 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:46:50.910005  881185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:46:50.910090  881185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:46:50.925867  881185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0520 12:46:50.926373  881185 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:46:50.927010  881185 main.go:141] libmachine: Using API Version  1
	I0520 12:46:50.927034  881185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:46:50.927393  881185 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:46:50.927589  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:46:50.962647  881185 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 12:46:50.963848  881185 start.go:297] selected driver: kvm2
	I0520 12:46:50.963875  881185 start.go:901] validating driver "kvm2" against &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:46:50.964080  881185 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:46:50.964427  881185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:46:50.964507  881185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:46:50.979394  881185 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:46:50.980093  881185 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:46:50.980124  881185 cni.go:84] Creating CNI manager for ""
	I0520 12:46:50.980132  881185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 12:46:50.980201  881185 start.go:340] cluster config:
	{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:46:50.980381  881185 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:46:50.982109  881185 out.go:177] * Starting "ha-252263" primary control-plane node in "ha-252263" cluster
	I0520 12:46:50.983459  881185 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:46:50.983495  881185 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:46:50.983504  881185 cache.go:56] Caching tarball of preloaded images
	I0520 12:46:50.983587  881185 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:46:50.983600  881185 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:46:50.983812  881185 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:46:50.984087  881185 start.go:360] acquireMachinesLock for ha-252263: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:46:50.984136  881185 start.go:364] duration metric: took 26.306µs to acquireMachinesLock for "ha-252263"
	I0520 12:46:50.984152  881185 start.go:96] Skipping create...Using existing machine configuration
	I0520 12:46:50.984165  881185 fix.go:54] fixHost starting: 
	I0520 12:46:50.984443  881185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:46:50.984476  881185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:46:50.998399  881185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I0520 12:46:50.998774  881185 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:46:50.999309  881185 main.go:141] libmachine: Using API Version  1
	I0520 12:46:50.999328  881185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:46:50.999634  881185 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:46:50.999802  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:46:50.999937  881185 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:46:51.001405  881185 fix.go:112] recreateIfNeeded on ha-252263: state=Running err=<nil>
	W0520 12:46:51.001427  881185 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 12:46:51.003696  881185 out.go:177] * Updating the running kvm2 "ha-252263" VM ...
	I0520 12:46:51.005091  881185 machine.go:94] provisionDockerMachine start ...
	I0520 12:46:51.005110  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:46:51.005344  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.007809  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.008385  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.008412  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.008564  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.008724  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.008820  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.008967  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.009106  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.009290  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.009301  881185 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 12:46:51.123685  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263
	
	I0520 12:46:51.123716  881185 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:46:51.123935  881185 buildroot.go:166] provisioning hostname "ha-252263"
	I0520 12:46:51.123964  881185 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:46:51.124203  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.127095  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.127471  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.127498  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.127673  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.127840  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.128016  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.128173  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.128392  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.128568  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.128586  881185 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263 && echo "ha-252263" | sudo tee /etc/hostname
	I0520 12:46:51.250579  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263
	
	I0520 12:46:51.250603  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.253500  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.253972  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.254009  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.254185  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.254363  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.254577  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.254710  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.254909  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.255132  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.255154  881185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:46:51.363737  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
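The shell snippet above is an idempotent /etc/hosts update: it only touches the file when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry if one is present and appending a new one otherwise. As a minimal sketch, a helper that assembles the same command string for an arbitrary hostname could look like the following (the hostsCommand function is purely illustrative and not part of minikube):

package main

import "fmt"

// hostsCommand builds the same idempotent /etc/hosts update seen in the log:
// if no entry ends in the hostname, either rewrite the 127.0.1.1 line or
// append a new one.
func hostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsCommand("ha-252263"))
}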
	I0520 12:46:51.363767  881185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:46:51.363806  881185 buildroot.go:174] setting up certificates
	I0520 12:46:51.363816  881185 provision.go:84] configureAuth start
	I0520 12:46:51.363865  881185 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:46:51.364162  881185 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:46:51.366963  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.367301  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.367327  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.367436  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.369766  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.370131  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.370156  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.370279  881185 provision.go:143] copyHostCerts
	I0520 12:46:51.370305  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:46:51.370348  881185 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:46:51.370368  881185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:46:51.370435  881185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:46:51.370565  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:46:51.370587  881185 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:46:51.370591  881185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:46:51.370620  881185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:46:51.370677  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:46:51.370693  881185 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:46:51.370699  881185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:46:51.370720  881185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:46:51.370787  881185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263 san=[127.0.0.1 192.168.39.182 ha-252263 localhost minikube]
	I0520 12:46:51.497594  881185 provision.go:177] copyRemoteCerts
	I0520 12:46:51.497663  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:46:51.497691  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.500317  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.500656  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.500681  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.500893  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.501100  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.501278  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.501403  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:46:51.586467  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:46:51.586538  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 12:46:51.616566  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:46:51.616623  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:46:51.648013  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:46:51.648074  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 12:46:51.676986  881185 provision.go:87] duration metric: took 313.153584ms to configureAuth
	I0520 12:46:51.677008  881185 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:46:51.677248  881185 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:46:51.677346  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.680031  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.680384  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.680407  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.680580  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.680785  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.680947  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.681105  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.681282  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.681494  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.681520  881185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:48:22.583665  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:48:22.583709  881185 machine.go:97] duration metric: took 1m31.578602067s to provisionDockerMachine
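The literal %!s(MISSING) in the command above, and the similar sequences later in this log (date +%!s(MISSING).%!N(MISSING), the find -printf "%!p(MISSING), " argument, and the evictionHard values shown as "0%!"(MISSING)), are not part of the commands that actually ran. They are Go's fmt package flagging a format verb with no matching argument when the command string was passed through a printf-style logger; the commands sent over SSH contained plain printf %s, date +%s.%N, -printf "%p, ", and "0%". A one-line illustration of the artifact:

package main

import "fmt"

func main() {
	// Passing a string that itself contains % verbs through a printf-style
	// call with no arguments reproduces the artifact seen in the log.
	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ..."))
	// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
}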
	I0520 12:48:22.583731  881185 start.go:293] postStartSetup for "ha-252263" (driver="kvm2")
	I0520 12:48:22.583745  881185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:48:22.583778  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.584140  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:48:22.584173  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.587762  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.588226  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.588253  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.588442  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.588653  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.588833  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.588969  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:48:22.674629  881185 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:48:22.679009  881185 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:48:22.679041  881185 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:48:22.679135  881185 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:48:22.679225  881185 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:48:22.679249  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:48:22.679333  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:48:22.689115  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:48:22.712800  881185 start.go:296] duration metric: took 129.05594ms for postStartSetup
	I0520 12:48:22.712847  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.713161  881185 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 12:48:22.713197  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.715956  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.716318  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.716342  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.716553  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.716767  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.716953  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.717124  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	W0520 12:48:22.801470  881185 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 12:48:22.801497  881185 fix.go:56] duration metric: took 1m31.81733513s for fixHost
	I0520 12:48:22.801521  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.804311  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.804783  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.804813  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.804956  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.805132  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.805268  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.805473  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.805614  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:48:22.805775  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:48:22.805785  881185 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:48:22.911486  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716209302.877814695
	
	I0520 12:48:22.911505  881185 fix.go:216] guest clock: 1716209302.877814695
	I0520 12:48:22.911512  881185 fix.go:229] Guest: 2024-05-20 12:48:22.877814695 +0000 UTC Remote: 2024-05-20 12:48:22.801504925 +0000 UTC m=+91.944301839 (delta=76.30977ms)
	I0520 12:48:22.911558  881185 fix.go:200] guest clock delta is within tolerance: 76.30977ms
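fix.go compares the guest clock (read over SSH with date +%s.%N) against the host-side timestamp taken when the command returned and only resyncs the guest when the delta exceeds a tolerance; here the 76.3ms difference is accepted. A small sketch of that check using the two timestamps from the log, with the tolerance value as an assumption (minikube's actual threshold is not shown here):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 5, 20, 12, 48, 22, 877814695, time.UTC)  // from "date +%s.%N" on the VM
	remote := time.Date(2024, 5, 20, 12, 48, 22, 801504925, time.UTC) // host-side timestamp
	delta := guest.Sub(remote)
	const tolerance = 1 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %t\n", delta, tolerance, delta.Abs() < tolerance)
}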
	I0520 12:48:22.911564  881185 start.go:83] releasing machines lock for "ha-252263", held for 1m31.92742038s
	I0520 12:48:22.911584  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.911886  881185 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:48:22.914654  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.915033  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.915075  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.915196  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.915645  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.915810  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.915893  881185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:48:22.915963  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.915977  881185 ssh_runner.go:195] Run: cat /version.json
	I0520 12:48:22.916002  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.918410  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.918687  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.918790  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.918814  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.918958  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.919118  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.919139  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.919147  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.919333  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.919357  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.919543  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:48:22.919559  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.919689  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.919825  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	W0520 12:48:23.025947  881185 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:48:23.026050  881185 ssh_runner.go:195] Run: systemctl --version
	I0520 12:48:23.032371  881185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:48:23.208602  881185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:48:23.217077  881185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:48:23.217137  881185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:48:23.226786  881185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 12:48:23.226803  881185 start.go:494] detecting cgroup driver to use...
	I0520 12:48:23.226880  881185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:48:23.246025  881185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:48:23.259750  881185 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:48:23.259796  881185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:48:23.274444  881185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:48:23.288860  881185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:48:23.453741  881185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:48:23.598333  881185 docker.go:233] disabling docker service ...
	I0520 12:48:23.598416  881185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:48:23.613865  881185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:48:23.627210  881185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:48:23.770585  881185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:48:23.919477  881185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:48:23.933943  881185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:48:23.953873  881185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:48:23.953937  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.964288  881185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:48:23.964356  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.974488  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.984368  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.994632  881185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:48:24.005240  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:24.015670  881185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:24.026882  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:24.037218  881185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:48:24.046540  881185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:48:24.055797  881185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:48:24.199072  881185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:48:28.943836  881185 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.744727686s)
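The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it points crictl at the crio socket, pins pause_image to registry.k8s.io/pause:3.9, forces cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, enables IP forwarding, then reloads systemd and restarts crio (which took ~4.7s here). As a sketch, the first two substitutions can be expressed against an in-memory config with Go's regexp package; the regex patterns mirror the sed expressions, and the sample config text is invented for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Println(conf)
}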
	I0520 12:48:28.943867  881185 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:48:28.943919  881185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:48:28.949096  881185 start.go:562] Will wait 60s for crictl version
	I0520 12:48:28.949159  881185 ssh_runner.go:195] Run: which crictl
	I0520 12:48:28.953257  881185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:48:28.994397  881185 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:48:28.994486  881185 ssh_runner.go:195] Run: crio --version
	I0520 12:48:29.024547  881185 ssh_runner.go:195] Run: crio --version
	I0520 12:48:29.054964  881185 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:48:29.056449  881185 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:48:29.059069  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:29.059513  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:29.059539  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:29.059715  881185 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:48:29.064789  881185 kubeadm.go:877] updating cluster {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:48:29.064930  881185 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:48:29.064975  881185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:48:29.106059  881185 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:48:29.106086  881185 crio.go:433] Images already preloaded, skipping extraction
	I0520 12:48:29.106135  881185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:48:29.138232  881185 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:48:29.138259  881185 cache_images.go:84] Images are preloaded, skipping loading
	I0520 12:48:29.138271  881185 kubeadm.go:928] updating node { 192.168.39.182 8443 v1.30.1 crio true true} ...
	I0520 12:48:29.138437  881185 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
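The drop-in above overrides the kubelet's ExecStart so the binary from /var/lib/minikube/binaries/v1.30.1 runs with this node's hostname override and node IP. A hedged sketch that renders the same ExecStart line from a small parameter struct via text/template (the struct and template below are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

type kubeletParams struct {
	Version  string
	Hostname string
	NodeIP   string
}

const execStart = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.Hostname}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("execstart").Parse(execStart))
	_ = t.Execute(os.Stdout, kubeletParams{Version: "v1.30.1", Hostname: "ha-252263", NodeIP: "192.168.39.182"})
}

Rendering with the values from this run reproduces the ExecStart line shown in the log.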
	I0520 12:48:29.138523  881185 ssh_runner.go:195] Run: crio config
	I0520 12:48:29.192092  881185 cni.go:84] Creating CNI manager for ""
	I0520 12:48:29.192112  881185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 12:48:29.192134  881185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:48:29.192157  881185 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-252263 NodeName:ha-252263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:48:29.192379  881185 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-252263"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
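In the generated config, the pod network (podSubnet 10.244.0.0/16) must not overlap the service network (serviceSubnet 10.96.0.0/12), and the evictionHard values rendered as "0%!"(MISSING) were written to disk as plain "0%" (the same logging artifact explained earlier). A small standard-library check of the two CIDRs, as a sketch:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pods := netip.MustParsePrefix("10.244.0.0/16")    // podSubnet from the kubeadm config
	services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet / ServiceCIDR
	overlap := pods.Overlaps(services)
	fmt.Printf("pod CIDR %v overlaps service CIDR %v: %t\n", pods, services, overlap)
}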
	
	I0520 12:48:29.192407  881185 kube-vip.go:115] generating kube-vip config ...
	I0520 12:48:29.192457  881185 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:48:29.203947  881185 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:48:29.204069  881185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
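The static pod above is written into /etc/kubernetes/manifests so the kubelet runs kube-vip on each control-plane node; its env vars enable ARP advertisement of the HA VIP 192.168.39.254 on eth0, leader election over the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry), and, because lb_enable is set, load-balancing of port 8443 across the control planes. A tiny sketch checking the expected ordering of those leader-election timings (the ordering rule stated in the comment is a general expectation, not something asserted by this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Leader-election timings from the kube-vip env above (seconds).
	lease := 5 * time.Second
	renew := 3 * time.Second
	retry := 1 * time.Second
	// A sane configuration keeps lease > renew > retry, so the current
	// leader can renew well before its lease expires.
	fmt.Printf("lease=%v renew=%v retry=%v ordered correctly: %t\n",
		lease, renew, retry, lease > renew && renew > retry)
}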
	I0520 12:48:29.204130  881185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:48:29.213329  881185 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:48:29.213394  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 12:48:29.222198  881185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 12:48:29.238445  881185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:48:29.254900  881185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 12:48:29.271051  881185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 12:48:29.287060  881185 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:48:29.292041  881185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:48:29.440333  881185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 12:48:29.456304  881185 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.182
	I0520 12:48:29.456330  881185 certs.go:194] generating shared ca certs ...
	I0520 12:48:29.456347  881185 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:48:29.456516  881185 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:48:29.456558  881185 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:48:29.456567  881185 certs.go:256] generating profile certs ...
	I0520 12:48:29.456645  881185 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:48:29.456671  881185 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9
	I0520 12:48:29.456686  881185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.22 192.168.39.60 192.168.39.254]
	I0520 12:48:29.578478  881185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9 ...
	I0520 12:48:29.578511  881185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9: {Name:mk4a184bdb7fba968ea974df92ad467368b653b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:48:29.578706  881185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9 ...
	I0520 12:48:29.578725  881185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9: {Name:mkec6a6258c44021fe39dc047dee8a55418c7ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:48:29.578822  881185 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:48:29.579023  881185 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
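certs.go mints a fresh API-server serving certificate with the SAN list shown in the log: the cluster service IP 10.96.0.1, loopback, the node IP, both peer control-plane IPs, and the HA VIP 192.168.39.254. A self-contained sketch of issuing a certificate with those IP SANs using crypto/x509 (self-signed here for brevity, whereas minikube signs with the minikubeCA key; the subject and key type are illustrative):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs as logged by crypto.go:68
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.182"), net.ParseIP("192.168.39.22"),
			net.ParseIP("192.168.39.60"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}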
	I0520 12:48:29.579171  881185 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:48:29.579188  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:48:29.579199  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:48:29.579209  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:48:29.579219  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:48:29.579232  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:48:29.579242  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:48:29.579254  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:48:29.579267  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:48:29.579309  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:48:29.579347  881185 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:48:29.579356  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:48:29.579375  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:48:29.579395  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:48:29.579414  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:48:29.579449  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:48:29.579475  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.579489  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.579503  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.580076  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:48:29.605586  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:48:29.629616  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:48:29.653697  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:48:29.676393  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 12:48:29.699916  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:48:29.723056  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:48:29.745949  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:48:29.772023  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:48:29.794457  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:48:29.817502  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:48:29.841298  881185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 12:48:29.857576  881185 ssh_runner.go:195] Run: openssl version
	I0520 12:48:29.863454  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:48:29.874264  881185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.878949  881185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.878988  881185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.884807  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:48:29.893765  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:48:29.904091  881185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.908416  881185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.908455  881185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.914048  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:48:29.923246  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:48:29.933467  881185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.937820  881185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.937862  881185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.943414  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
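	The three "test -L ... || ln -fs ..." commands above install the CA certificates under OpenSSL's subject-hash names (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is the lookup scheme TLS clients on the node use to find trusted CAs in /etc/ssl/certs. A minimal sketch of the same step, using an illustrative certificate path:
	
	  # compute the subject hash and create the hash-named symlink (path is illustrative)
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	  sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/${HASH}.0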
	I0520 12:48:29.952311  881185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:48:29.957009  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 12:48:29.962677  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 12:48:29.968138  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 12:48:29.973525  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 12:48:29.978870  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 12:48:29.984636  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
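	Each of the six "openssl x509 ... -checkend 86400" probes above exits 0 only if the named certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise; the start code appears to use this to decide whether the existing control-plane certificates can be reused. A hedged sketch of the same check against one of the paths shown above:
	
	  # succeeds only if the cert remains valid for at least the next 24 hours
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "certificate ok for the next 24h"
	  else
	      echo "certificate missing or expiring within 24h"
	  fi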
	I0520 12:48:29.990202  881185 kubeadm.go:391] StartCluster: {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:48:29.990304  881185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:48:29.990340  881185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:48:30.029695  881185 cri.go:89] found id: "d7a6fad8a75b9788b12befa54d77c18ee2510c7bad67a78381dfb14bfb61654c"
	I0520 12:48:30.029724  881185 cri.go:89] found id: "a7b500ca4a5ff4b26d4c450d219ed21171a47cd6937d3a9c5cee7c7c90214ff2"
	I0520 12:48:30.029730  881185 cri.go:89] found id: "8d83655ac29f13b80a76832615408f83141c2476915f4ce562a635f00c84b477"
	I0520 12:48:30.029737  881185 cri.go:89] found id: "49c278c418300797d23288d9dcf4ec027b7fa754b2869d4a360d8e196c2fcc5e"
	I0520 12:48:30.029741  881185 cri.go:89] found id: "0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a"
	I0520 12:48:30.029746  881185 cri.go:89] found id: "81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7"
	I0520 12:48:30.029750  881185 cri.go:89] found id: "f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353"
	I0520 12:48:30.029753  881185 cri.go:89] found id: "8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e"
	I0520 12:48:30.029757  881185 cri.go:89] found id: "8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9"
	I0520 12:48:30.029778  881185 cri.go:89] found id: "78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf"
	I0520 12:48:30.029788  881185 cri.go:89] found id: "8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290"
	I0520 12:48:30.029792  881185 cri.go:89] found id: "38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257"
	I0520 12:48:30.029797  881185 cri.go:89] found id: "57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b"
	I0520 12:48:30.029804  881185 cri.go:89] found id: ""
	I0520 12:48:30.029858  881185 ssh_runner.go:195] Run: sudo runc list -f json
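	After logging the StartCluster config, the code enumerates the kube-system containers CRI-O already knows about (the IDs found just above) before deciding what to restart. The same two queries can be run by hand on the node, assuming SSH access (for example via minikube ssh -p ha-252263):
	
	  # IDs of all kube-system containers, as reported by CRI-O (same label filter as the log)
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  # the OCI runtime's own view of the containers
	  sudo runc list -f json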
	
	
	==> CRI-O <==
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.107717473Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7faa6c9-c237-49ba-9488-6fe3acc519c9 name=/runtime.v1.RuntimeService/Version
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.109127845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86e6abd2-8c5a-4aa0-821d-ff18d5376d4a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.109776901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209469109751368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86e6abd2-8c5a-4aa0-821d-ff18d5376d4a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.110334382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7511bd4-5cde-48da-b487-2e69573d84e3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.110409866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7511bd4-5cde-48da-b487-2e69573d84e3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.111045045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7511bd4-5cde-48da-b487-2e69573d84e3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.118120835Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8d48f1a6-c0d9-4b74-9019-cadfd6d24cac name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.118555816Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-vdgxd,Uid:57097c7d-bdee-48f4-8736-264f6cfaee92,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209349076736477,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:40:19.773809030Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-252263,Uid:ab810d379e9444cc018c95e07377fd96,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1716209330031846849,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{kubernetes.io/config.hash: ab810d379e9444cc018c95e07377fd96,kubernetes.io/config.seen: 2024-05-20T12:48:29.255482282Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-c2vkj,Uid:a5fa83f0-abaa-4c78-8d08-124503934fb1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315460804953,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05
-20T12:37:53.721563927Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-96h5w,Uid:3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315408182043,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:37:53.714497397Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5db18dbf-710f-4c10-84bb-c5120c865740,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315356327545,Labels:map[string]string
{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/confi
g.seen: 2024-05-20T12:37:53.720646165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-252263,Uid:140ef0230d166f054d4e1035bde09336,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315353049154,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 140ef0230d166f054d4e1035bde09336,kubernetes.io/config.seen: 2024-05-20T12:37:37.883365018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&PodSandboxMetadata{Name:etcd-ha-252263,Uid:c625499e3affdd6ad46b9f9df2e2d950,Namespace:kube-system,Attempt:1,},State:SANDBOX_REA
DY,CreatedAt:1716209315336333749,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.182:2379,kubernetes.io/config.hash: c625499e3affdd6ad46b9f9df2e2d950,kubernetes.io/config.seen: 2024-05-20T12:37:37.883360400Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&PodSandboxMetadata{Name:kube-proxy-z5zvt,Uid:fd9f5f1f-60ac-4567-8d5c-b2de0404623f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315334769665,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d
5c-b2de0404623f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:37:51.464267381Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-252263,Uid:53a203f8e0978c311771fe427cfc08bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315318098566,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.182:8443,kubernetes.io/config.hash: 53a203f8e0978c311771fe427cfc08bc,kubernetes.io/config.seen: 2024-05-20T12:37:37.883363275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b1
4fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-252263,Uid:52a55b737ed9f789145db5fccf1c1af9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209315293863358,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 52a55b737ed9f789145db5fccf1c1af9,kubernetes.io/config.seen: 2024-05-20T12:37:37.883364267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&PodSandboxMetadata{Name:kindnet-8vkjc,Uid:b222e7ad-6005-42bf-867f-40b94d584782,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716209310316614408,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:37:51.494686355Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-vdgxd,Uid:57097c7d-bdee-48f4-8736-264f6cfaee92,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716208820096720251,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:40:19.773809030Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-c2vkj,Uid:a5fa83f0-abaa-4c78-8d08-124503934fb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716208674041282288,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:37:53.721563927Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-96h5w,Uid:3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716208674021866541,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:37:53.714497397Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&PodSandboxMetadata{Name:kube-proxy-z5zvt,Uid:fd9f5f1f-60ac-4567-8d5c-b2de0404623f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716208671776528032,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T12:37:51.464267381Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-252263,Uid:140ef0230d166f054d4e1035bde09336,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716208651591797040,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 140ef0230d166f054d4e1035bde09336,kubernetes.io/config.seen: 2024-05-20T12:37:31.081184831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&PodSandboxMetadata{Name:etcd-ha-252263,Uid:c625499e3affdd6ad46b9f9df2e2d950,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716208651564189808,Labels:map[string]string{component: etcd,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.182:2379,kubernetes.io/config.hash: c625499e3affdd6ad46b9f9df2e2d950,kubernetes.io/config.seen: 2024-05-20T12:37:31.081180339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8d48f1a6-c0d9-4b74-9019-cadfd6d24cac name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.119546740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3188e4b-26ea-49dc-8d7f-ca616da9b841 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.119603116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3188e4b-26ea-49dc-8d7f-ca616da9b841 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.120121254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3188e4b-26ea-49dc-8d7f-ca616da9b841 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.161534237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e517534b-a1a9-4b53-b2bc-d5db55d665a2 name=/runtime.v1.RuntimeService/Version
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.161637608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e517534b-a1a9-4b53-b2bc-d5db55d665a2 name=/runtime.v1.RuntimeService/Version
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.162622076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d1ec6ba-519c-4e0f-8c8f-855f69680df7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.163378959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209469163354088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d1ec6ba-519c-4e0f-8c8f-855f69680df7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.163865506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23fcfa66-d580-4c8c-b122-74c114d445b5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.163984878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23fcfa66-d580-4c8c-b122-74c114d445b5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.164356032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23fcfa66-d580-4c8c-b122-74c114d445b5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.211344639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa6e126b-5470-4f70-8538-c441524ce2bb name=/runtime.v1.RuntimeService/Version
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.211434750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa6e126b-5470-4f70-8538-c441524ce2bb name=/runtime.v1.RuntimeService/Version
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.212347288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37c3db56-07cf-416e-8381-5d363532c03f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.212757042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209469212734514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37c3db56-07cf-416e-8381-5d363532c03f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.213282914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db2ca74a-1ccb-40a8-ad19-8a4329daf567 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.213334682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db2ca74a-1ccb-40a8-ad19-8a4329daf567 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:51:09 ha-252263 crio[3781]: time="2024-05-20 12:51:09.213744810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db2ca74a-1ccb-40a8-ad19-8a4329daf567 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	55688fae5ad57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   b40196f493a75       storage-provisioner
	0c1f331e32feb       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   402b31683e2d3       kindnet-8vkjc
	1779ba907d699       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            3                   cb0fd61b6b947       kube-apiserver-ha-252263
	bbdb833df0479       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   2                   8b14fedca25ac       kube-controller-manager-ha-252263
	ea90ef3e02cff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   b40196f493a75       storage-provisioner
	129f71aae7a20       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   8d0e14d109707       busybox-fc5497c4f-vdgxd
	76eb61ab14b83       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   af352eb3fc186       kube-vip-ha-252263
	ece57eb718aac       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   79174bbdb164d       kube-proxy-z5zvt
	3a2e85d5f6d40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0922184556c5d       coredns-7db6d8ff4d-96h5w
	b1bfad9b3a0b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   acb60c855b092       coredns-7db6d8ff4d-c2vkj
	a527eb856411d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   cccebdc1b35d5       kube-scheduler-ha-252263
	daebcc18593c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   9b343fc81ed0f       etcd-ha-252263
	3994d5ac68b46       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   cb0fd61b6b947       kube-apiserver-ha-252263
	d506292b9f275       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   8b14fedca25ac       kube-controller-manager-ha-252263
	f0fa157b9750d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   402b31683e2d3       kindnet-8vkjc
	7fb77a13cb639       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   e3f7317af104f       busybox-fc5497c4f-vdgxd
	0aaaa2c2d0a2a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   8217c5dc10b50       coredns-7db6d8ff4d-c2vkj
	81df7a9501142       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   43b0b303d8ecf       coredns-7db6d8ff4d-96h5w
	8481a0a858b8f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago       Exited              kube-proxy                0                   85f3c6afc77a5       kube-proxy-z5zvt
	8516a1fdea0a5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago       Exited              kube-scheduler            0                   e9f3670ad0515       kube-scheduler-ha-252263
	57b99e90b3f2c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   9dcb3183f7b71       etcd-ha-252263
	
	
	==> coredns [0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a] <==
	[INFO] 10.244.2.2:33816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646431s
	[INFO] 10.244.2.2:35739 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000262525s
	[INFO] 10.244.2.2:38598 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158046s
	[INFO] 10.244.2.2:58591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129009s
	[INFO] 10.244.2.2:42154 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077099s
	[INFO] 10.244.1.2:55966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236408s
	[INFO] 10.244.1.2:38116 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165417s
	[INFO] 10.244.1.2:42765 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013421s
	[INFO] 10.244.0.4:43917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087757s
	[INFO] 10.244.2.2:39196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131607s
	[INFO] 10.244.2.2:53256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139178s
	[INFO] 10.244.2.2:51674 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089462s
	[INFO] 10.244.2.2:49072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088789s
	[INFO] 10.244.1.2:56181 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013731s
	[INFO] 10.244.1.2:41238 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121064s
	[INFO] 10.244.0.4:51538 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100171s
	[INFO] 10.244.2.2:59762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112653s
	[INFO] 10.244.2.2:48400 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080614s
	[INFO] 10.244.1.2:54360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166063s
	[INFO] 10.244.1.2:51350 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071222s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1844&timeout=6m41s&timeoutSeconds=401&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1844&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1844&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3a2e85d5f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46806->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46806->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7] <==
	[INFO] 10.244.1.2:51684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008851s
	[INFO] 10.244.1.2:37865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122394s
	[INFO] 10.244.0.4:41864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103464s
	[INFO] 10.244.0.4:48776 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078784s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060251s
	[INFO] 10.244.1.2:44802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115237s
	[INFO] 10.244.1.2:33948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012433s
	[INFO] 10.244.0.4:54781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008753s
	[INFO] 10.244.0.4:54168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243725s
	[INFO] 10.244.0.4:60539 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140289s
	[INFO] 10.244.2.2:37865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093682s
	[INFO] 10.244.2.2:38339 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116317s
	[INFO] 10.244.1.2:44551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117883s
	[INFO] 10.244.1.2:42004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008187s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1788": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1788": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[776725303]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 12:48:47.556) (total time: 11492ms):
	Trace[776725303]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37466->10.96.0.1:443: read: connection reset by peer 11492ms (12:48:59.049)
	Trace[776725303]: [11.492842619s] [11.492842619s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-252263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_37_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:51:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-252263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 35935ea8555a4df9a418abd1fd7734ca
	  System UUID:                35935ea8-555a-4df9-a418-abd1fd7734ca
	  Boot ID:                    96326bcd-6af4-4e73-8e52-8d2d55c0ef49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vdgxd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-96h5w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-c2vkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-252263                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-8vkjc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-252263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-252263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-z5zvt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-252263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-252263                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 110s   kube-proxy       
	  Normal   Starting                 13m    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-252263 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-252263 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-252263 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Warning  ContainerGCFailed        3m32s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           104s   node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   RegisteredNode           95s    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   RegisteredNode           32s    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	
	
	Name:               ha-252263-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_38_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:38:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:51:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-252263-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 39c8edfb8be441aab0eaa91516d89ad1
	  System UUID:                39c8edfb-8be4-41aa-b0ea-a91516d89ad1
	  Boot ID:                    c0a161fe-111b-4df4-b1a3-a438fa28cf3b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqdrj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-252263-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-lfz72                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-252263-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-252263-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-84x7f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-252263-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-252263-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 82s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-252263-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-252263-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-252263-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  NodeNotReady             9m11s                  node-controller  Node ha-252263-m02 status is now: NodeNotReady
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node ha-252263-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                   node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           95s                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	
	
	Name:               ha-252263-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_39_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:39:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:50:39 +0000   Mon, 20 May 2024 12:50:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:50:39 +0000   Mon, 20 May 2024 12:50:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:50:39 +0000   Mon, 20 May 2024 12:50:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:50:39 +0000   Mon, 20 May 2024 12:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-252263-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3787355a13534f32abf4729d5f862897
	  System UUID:                3787355a-1353-4f32-abf4-729d5f862897
	  Boot ID:                    d04dab9a-02e8-4ea1-ba23-b81e715588ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xq6j6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-252263-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-d67g2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-252263-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-252263-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-c8zs5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-252263-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-252263-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-252263-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-252263-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 60s (x2 over 60s)  kubelet          Node ha-252263-m03 has been rebooted, boot id: d04dab9a-02e8-4ea1-ba23-b81e715588ce
	  Normal   NodeHasSufficientMemory  60s (x3 over 60s)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x3 over 60s)  kubelet          Node ha-252263-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x3 over 60s)  kubelet          Node ha-252263-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             60s                kubelet          Node ha-252263-m03 status is now: NodeNotReady
	  Normal   NodeReady                60s                kubelet          Node ha-252263-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-252263-m03 event: Registered Node ha-252263-m03 in Controller
	
	
	Name:               ha-252263-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_40_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:40:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:51:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:51:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:51:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:51:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:51:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-252263-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e01b8d01b7b3442aafbd1460443cc06b
	  System UUID:                e01b8d01-b7b3-442a-afbd-1460443cc06b
	  Boot ID:                    58c85148-8788-44b4-9405-a7fc7e26d1ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5st4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-gww58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-252263-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-252263-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s               node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-252263-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7s (x3 over 7s)    kubelet          Node ha-252263-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s (x3 over 7s)    kubelet          Node ha-252263-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s (x3 over 7s)    kubelet          Node ha-252263-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 7s (x2 over 7s)    kubelet          Node ha-252263-m04 has been rebooted, boot id: 58c85148-8788-44b4-9405-a7fc7e26d1ce
	  Normal   NodeReady                7s (x2 over 7s)    kubelet          Node ha-252263-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.720517] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056941] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063479] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.182637] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137786] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261133] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.100200] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.178110] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.059165] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.929456] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.070241] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.174384] kauditd_printk_skb: 21 callbacks suppressed
	[May20 12:38] kauditd_printk_skb: 74 callbacks suppressed
	[May20 12:48] systemd-fstab-generator[3701]: Ignoring "noauto" option for root device
	[  +0.145232] systemd-fstab-generator[3713]: Ignoring "noauto" option for root device
	[  +0.172934] systemd-fstab-generator[3727]: Ignoring "noauto" option for root device
	[  +0.147920] systemd-fstab-generator[3739]: Ignoring "noauto" option for root device
	[  +0.278787] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +5.238925] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +0.084712] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.997490] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.164477] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.056224] kauditd_printk_skb: 1 callbacks suppressed
	[May20 12:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b] <==
	2024/05/20 12:46:51 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T12:46:51.821116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"993.905199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-20T12:46:51.821126Z","caller":"traceutil/trace.go:171","msg":"trace[120374282] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"993.925998ms","start":"2024-05-20T12:46:50.827198Z","end":"2024-05-20T12:46:51.821124Z","steps":["trace[120374282] 'agreement among raft nodes before linearized reading'  (duration: 993.914319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:46:51.821138Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:46:50.827194Z","time spent":"993.940422ms","remote":"127.0.0.1:49620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" limit:10000 "}
	2024/05/20 12:46:51 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T12:46:51.893337Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.182:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T12:46:51.893386Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.182:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T12:46:51.893459Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"50ad4904f737d679","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T12:46:51.893654Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893684Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893726Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893773Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893815Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893864Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893875Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.89388Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.893889Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.893991Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894046Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894093Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.896434Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.182:2380"}
	{"level":"info","ts":"2024-05-20T12:46:51.896679Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.182:2380"}
	{"level":"info","ts":"2024-05-20T12:46:51.896711Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-252263","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.182:2380"],"advertise-client-urls":["https://192.168.39.182:2379"]}
	
	
	==> etcd [daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2] <==
	{"level":"warn","ts":"2024-05-20T12:50:06.89307Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.60:2380/version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:06.893126Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:10.894524Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.60:2380/version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:10.894584Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:11.858741Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"30f45f742d7f2ecf","rtt":"0s","error":"dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:11.858864Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"30f45f742d7f2ecf","rtt":"0s","error":"dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:14.896071Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.60:2380/version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:14.89619Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-20T12:50:15.772021Z","caller":"traceutil/trace.go:171","msg":"trace[2141637668] linearizableReadLoop","detail":"{readStateIndex:2688; appliedIndex:2688; }","duration":"216.020569ms","start":"2024-05-20T12:50:15.555981Z","end":"2024-05-20T12:50:15.772002Z","steps":["trace[2141637668] 'read index received'  (duration: 216.015053ms)","trace[2141637668] 'applied index is now lower than readState.Index'  (duration: 4.176µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:50:15.773393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.366657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"warn","ts":"2024-05-20T12:50:15.773538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.93694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-252263-m03\" ","response":"range_response_count:1 size:5666"}
	{"level":"info","ts":"2024-05-20T12:50:15.773594Z","caller":"traceutil/trace.go:171","msg":"trace[2014186787] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-252263-m03; range_end:; response_count:1; response_revision:2325; }","duration":"130.020454ms","start":"2024-05-20T12:50:15.643566Z","end":"2024-05-20T12:50:15.773586Z","steps":["trace[2014186787] 'agreement among raft nodes before linearized reading'  (duration: 129.854668ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:50:15.77366Z","caller":"traceutil/trace.go:171","msg":"trace[178152452] transaction","detail":"{read_only:false; response_revision:2325; number_of_response:1; }","duration":"223.297639ms","start":"2024-05-20T12:50:15.550354Z","end":"2024-05-20T12:50:15.773652Z","steps":["trace[178152452] 'process raft request'  (duration: 222.104088ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:50:15.773545Z","caller":"traceutil/trace.go:171","msg":"trace[1947810696] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2324; }","duration":"217.661673ms","start":"2024-05-20T12:50:15.555867Z","end":"2024-05-20T12:50:15.773529Z","steps":["trace[1947810696] 'agreement among raft nodes before linearized reading'  (duration: 216.468916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:50:16.859662Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"30f45f742d7f2ecf","rtt":"0s","error":"dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:16.859736Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"30f45f742d7f2ecf","rtt":"0s","error":"dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:18.897666Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.60:2380/version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T12:50:18.897739Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"30f45f742d7f2ecf","error":"Get \"https://192.168.39.60:2380/version\": dial tcp 192.168.39.60:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-20T12:50:19.878792Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.878968Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.888799Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.929306Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"50ad4904f737d679","to":"30f45f742d7f2ecf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T12:50:19.929361Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.945499Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"50ad4904f737d679","to":"30f45f742d7f2ecf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T12:50:19.945714Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	
	
	==> kernel <==
	 12:51:09 up 14 min,  0 users,  load average: 0.10, 0.29, 0.23
	Linux ha-252263 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd] <==
	I0520 12:50:31.827199       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:50:41.844172       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:50:41.845509       1 main.go:227] handling current node
	I0520 12:50:41.845587       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:50:41.845665       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:50:41.845849       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:50:41.845967       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:50:41.846075       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:50:41.846103       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:50:51.860237       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:50:51.860290       1 main.go:227] handling current node
	I0520 12:50:51.860306       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:50:51.860313       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:50:51.860445       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:50:51.860458       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:50:51.860557       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:50:51.860593       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:51:01.877239       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:51:01.877328       1 main.go:227] handling current node
	I0520 12:51:01.877369       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:51:01.877388       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:51:01.877502       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0520 12:51:01.877522       1 main.go:250] Node ha-252263-m03 has CIDR [10.244.2.0/24] 
	I0520 12:51:01.877579       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:51:01.877597       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127] <==
	I0520 12:48:31.064626       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0520 12:48:31.064774       1 main.go:107] hostIP = 192.168.39.182
	podIP = 192.168.39.182
	I0520 12:48:31.065029       1 main.go:116] setting mtu 1500 for CNI 
	I0520 12:48:31.065078       1 main.go:146] kindnetd IP family: "ipv4"
	I0520 12:48:31.065112       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 12:48:31.363404       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0520 12:48:34.473345       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 12:48:37.545353       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 12:48:40.617491       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 12:48:53.629512       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79] <==
	I0520 12:49:21.762049       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 12:49:21.762083       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 12:49:21.813829       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 12:49:21.821271       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 12:49:21.821307       1 policy_source.go:224] refreshing policies
	I0520 12:49:21.830543       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 12:49:21.852297       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 12:49:21.853842       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 12:49:21.854155       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 12:49:21.855055       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 12:49:21.855236       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 12:49:21.855322       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 12:49:21.862529       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 12:49:21.865322       1 aggregator.go:165] initial CRD sync complete...
	I0520 12:49:21.866020       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 12:49:21.866583       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 12:49:21.866632       1 cache.go:39] Caches are synced for autoregister controller
	I0520 12:49:21.865971       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0520 12:49:21.873845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.60]
	I0520 12:49:21.875144       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:49:21.884952       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0520 12:49:21.888038       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0520 12:49:22.759161       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0520 12:49:23.108874       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.22 192.168.39.60]
	W0520 12:49:33.110505       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.22]
	
	
	==> kube-apiserver [3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1] <==
	I0520 12:48:36.521847       1 options.go:221] external host was not specified, using 192.168.39.182
	I0520 12:48:36.526282       1 server.go:148] Version: v1.30.1
	I0520 12:48:36.526330       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:48:37.045856       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 12:48:37.046134       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 12:48:37.046293       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 12:48:37.046449       1 instance.go:299] Using reconciler: lease
	I0520 12:48:37.046184       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0520 12:48:57.044273       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0520 12:48:57.044335       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0520 12:48:57.047806       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853] <==
	I0520 12:49:34.190186       1 shared_informer.go:320] Caches are synced for PVC protection
	I0520 12:49:34.193453       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 12:49:34.197110       1 shared_informer.go:320] Caches are synced for daemon sets
	I0520 12:49:34.301807       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 12:49:34.358309       1 shared_informer.go:320] Caches are synced for disruption
	I0520 12:49:34.375310       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 12:49:34.414506       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 12:49:34.825004       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:49:34.833340       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:49:34.833407       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 12:49:44.111029       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gq27x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gq27x\": the object has been modified; please apply your changes to the latest version and try again"
	I0520 12:49:44.111547       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"42b27f68-47ff-4567-9968-ad6739c2f4a0", APIVersion:"v1", ResourceVersion:"257", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gq27x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gq27x": the object has been modified; please apply your changes to the latest version and try again
	I0520 12:49:44.135345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.763046ms"
	I0520 12:49:44.135778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="210.288µs"
	I0520 12:49:44.174403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.512901ms"
	I0520 12:49:44.174511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.416µs"
	I0520 12:49:44.396835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.023µs"
	I0520 12:49:48.964155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.964377ms"
	I0520 12:49:48.964353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.85µs"
	I0520 12:50:05.295842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.768171ms"
	I0520 12:50:05.296035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.609µs"
	I0520 12:50:10.355184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.881µs"
	I0520 12:50:27.608399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.189602ms"
	I0520 12:50:27.608597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.813µs"
	I0520 12:51:02.195819       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-252263-m04"
	
	
	==> kube-controller-manager [d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2] <==
	I0520 12:48:36.881521       1 serving.go:380] Generated self-signed cert in-memory
	I0520 12:48:37.299070       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 12:48:37.299156       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:48:37.300663       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 12:48:37.301013       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 12:48:37.301191       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 12:48:37.301340       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0520 12:48:58.052731       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.182:8443/healthz\": dial tcp 192.168.39.182:8443: connect: connection refused"
	
	
	==> kube-proxy [8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e] <==
	E0520 12:45:42.634170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:45.707254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:45.707448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:45.707608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:45.707552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:45.707685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:45.707778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:51.850221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:51.850687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:51.850809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:51.851022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:51.850955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:51.851107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:01.066377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:01.066523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:04.138835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:04.139297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:04.139462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:04.139522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:19.497253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:19.497753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:22.570109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:22.570347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:22.570471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:22.570513       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127] <==
	I0520 12:48:37.319815       1 server_linux.go:69] "Using iptables proxy"
	E0520 12:48:37.738050       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:48:40.810880       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:48:43.881357       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:48:50.026972       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:49:02.313996       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 12:49:18.772625       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.182"]
	I0520 12:49:18.833757       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:49:18.833847       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:49:18.833877       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:49:18.836741       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:49:18.837141       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:49:18.837357       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:49:18.838871       1 config.go:192] "Starting service config controller"
	I0520 12:49:18.838967       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:49:18.839015       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:49:18.839033       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:49:18.839637       1 config.go:319] "Starting node config controller"
	I0520 12:49:18.839674       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:49:18.939991       1 shared_informer.go:320] Caches are synced for node config
	I0520 12:49:18.940081       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:49:18.940136       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290] <==
	W0520 12:46:48.577796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:48.577942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:48.697680       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 12:46:48.697780       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 12:46:48.721814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:46:48.721953       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:46:48.782494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:46:48.782640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 12:46:48.807110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:48.807189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:48.918808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 12:46:48.918968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 12:46:49.055551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:46:49.055715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:46:49.803504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:46:49.803554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:46:50.124720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:46:50.124808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:46:50.219194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:50.219288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:50.309520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:50.309552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:50.940146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:46:50.940261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:46:51.803280       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792] <==
	W0520 12:49:14.731209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.182:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:14.731315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.182:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:15.092341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.182:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:15.092421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.182:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:15.555413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:15.555477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:15.653644       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.182:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:15.653730       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.182:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:16.775479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.182:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:16.775534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.182:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:17.543703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.182:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:17.543773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.182:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:17.823633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.182:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:17.823705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.182:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.388446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.182:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.388503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.182:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.644116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.644174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.779659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.779722       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.872582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.182:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.872641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.182:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:21.790760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:49:21.790961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0520 12:49:41.260947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:49:19 ha-252263 kubelet[1370]: I0520 12:49:19.928548    1370 scope.go:117] "RemoveContainer" containerID="3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1"
	May 20 12:49:24 ha-252263 kubelet[1370]: I0520 12:49:24.928779    1370 scope.go:117] "RemoveContainer" containerID="f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127"
	May 20 12:49:24 ha-252263 kubelet[1370]: E0520 12:49:24.929661    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-8vkjc_kube-system(b222e7ad-6005-42bf-867f-40b94d584782)\"" pod="kube-system/kindnet-8vkjc" podUID="b222e7ad-6005-42bf-867f-40b94d584782"
	May 20 12:49:27 ha-252263 kubelet[1370]: I0520 12:49:27.473537    1370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-vdgxd" podStartSLOduration=547.56549681 podStartE2EDuration="9m8.473458619s" podCreationTimestamp="2024-05-20 12:40:19 +0000 UTC" firstStartedPulling="2024-05-20 12:40:20.320200286 +0000 UTC m=+162.536110359" lastFinishedPulling="2024-05-20 12:40:21.228162098 +0000 UTC m=+163.444072168" observedRunningTime="2024-05-20 12:40:21.645371262 +0000 UTC m=+163.861281355" watchObservedRunningTime="2024-05-20 12:49:27.473458619 +0000 UTC m=+709.689368708"
	May 20 12:49:32 ha-252263 kubelet[1370]: I0520 12:49:32.928123    1370 scope.go:117] "RemoveContainer" containerID="ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf"
	May 20 12:49:32 ha-252263 kubelet[1370]: E0520 12:49:32.929061    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5db18dbf-710f-4c10-84bb-c5120c865740)\"" pod="kube-system/storage-provisioner" podUID="5db18dbf-710f-4c10-84bb-c5120c865740"
	May 20 12:49:35 ha-252263 kubelet[1370]: I0520 12:49:35.929005    1370 scope.go:117] "RemoveContainer" containerID="f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127"
	May 20 12:49:35 ha-252263 kubelet[1370]: E0520 12:49:35.929258    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-8vkjc_kube-system(b222e7ad-6005-42bf-867f-40b94d584782)\"" pod="kube-system/kindnet-8vkjc" podUID="b222e7ad-6005-42bf-867f-40b94d584782"
	May 20 12:49:37 ha-252263 kubelet[1370]: E0520 12:49:37.945356    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:49:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:49:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:49:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:49:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:49:45 ha-252263 kubelet[1370]: I0520 12:49:45.929691    1370 scope.go:117] "RemoveContainer" containerID="ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf"
	May 20 12:49:45 ha-252263 kubelet[1370]: E0520 12:49:45.930052    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5db18dbf-710f-4c10-84bb-c5120c865740)\"" pod="kube-system/storage-provisioner" podUID="5db18dbf-710f-4c10-84bb-c5120c865740"
	May 20 12:49:50 ha-252263 kubelet[1370]: I0520 12:49:50.928332    1370 scope.go:117] "RemoveContainer" containerID="f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127"
	May 20 12:49:57 ha-252263 kubelet[1370]: I0520 12:49:57.928765    1370 scope.go:117] "RemoveContainer" containerID="ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf"
	May 20 12:50:15 ha-252263 kubelet[1370]: I0520 12:50:15.928383    1370 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-252263" podUID="6e5827b4-5a1c-4523-9282-8c901ab68b5a"
	May 20 12:50:15 ha-252263 kubelet[1370]: I0520 12:50:15.948245    1370 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-252263"
	May 20 12:50:17 ha-252263 kubelet[1370]: I0520 12:50:17.953168    1370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-252263" podStartSLOduration=2.953143597 podStartE2EDuration="2.953143597s" podCreationTimestamp="2024-05-20 12:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 12:50:17.952768164 +0000 UTC m=+760.168678253" watchObservedRunningTime="2024-05-20 12:50:17.953143597 +0000 UTC m=+760.169053688"
	May 20 12:50:37 ha-252263 kubelet[1370]: E0520 12:50:37.952184    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:50:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:50:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:50:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:50:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 12:51:08.743848  882572 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18932-852915/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-252263 -n ha-252263
E0520 12:51:10.516381  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context ha-252263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (381.90s)
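
Side note on the "failed to output last start logs: ... bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses any single line longer than its token limit (64 KiB by default), which a long minikube start log line can exceed. A minimal sketch of reading such a file with a larger per-line limit follows; the file path and buffer sizes are illustrative only and this is not the code the test harness actually uses.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the default 64 KiB to 1 MiB so very
	// long lines do not abort the scan with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
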

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 stop -v=7 --alsologtostderr: exit status 82 (2m0.483341788s)

                                                
                                                
-- stdout --
	* Stopping node "ha-252263-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:51:28.422570  882976 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:51:28.422914  882976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:51:28.422925  882976 out.go:304] Setting ErrFile to fd 2...
	I0520 12:51:28.422929  882976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:51:28.423088  882976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:51:28.423331  882976 out.go:298] Setting JSON to false
	I0520 12:51:28.423408  882976 mustload.go:65] Loading cluster: ha-252263
	I0520 12:51:28.423785  882976 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:51:28.423926  882976 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:51:28.424160  882976 mustload.go:65] Loading cluster: ha-252263
	I0520 12:51:28.424356  882976 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:51:28.424392  882976 stop.go:39] StopHost: ha-252263-m04
	I0520 12:51:28.424905  882976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:51:28.424963  882976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:51:28.440062  882976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0520 12:51:28.440538  882976 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:51:28.441131  882976 main.go:141] libmachine: Using API Version  1
	I0520 12:51:28.441154  882976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:51:28.441521  882976 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:51:28.443737  882976 out.go:177] * Stopping node "ha-252263-m04"  ...
	I0520 12:51:28.445320  882976 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 12:51:28.445358  882976 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:51:28.445585  882976 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 12:51:28.445615  882976 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:51:28.448731  882976 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:51:28.449231  882976 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:50:56 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:51:28.449260  882976 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:51:28.449409  882976 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:51:28.449595  882976 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:51:28.449782  882976 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:51:28.449961  882976 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	I0520 12:51:28.541687  882976 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 12:51:28.594617  882976 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 12:51:28.647550  882976 main.go:141] libmachine: Stopping "ha-252263-m04"...
	I0520 12:51:28.647590  882976 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:51:28.649090  882976 main.go:141] libmachine: (ha-252263-m04) Calling .Stop
	I0520 12:51:28.652385  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 0/120
	I0520 12:51:29.653841  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 1/120
	I0520 12:51:30.655181  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 2/120
	I0520 12:51:31.656363  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 3/120
	I0520 12:51:32.657663  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 4/120
	I0520 12:51:33.659438  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 5/120
	I0520 12:51:34.660840  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 6/120
	I0520 12:51:35.662204  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 7/120
	I0520 12:51:36.663813  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 8/120
	I0520 12:51:37.665285  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 9/120
	I0520 12:51:38.667811  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 10/120
	I0520 12:51:39.669177  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 11/120
	I0520 12:51:40.670679  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 12/120
	I0520 12:51:41.672150  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 13/120
	I0520 12:51:42.673399  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 14/120
	I0520 12:51:43.675273  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 15/120
	I0520 12:51:44.677308  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 16/120
	I0520 12:51:45.678637  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 17/120
	I0520 12:51:46.680042  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 18/120
	I0520 12:51:47.681269  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 19/120
	I0520 12:51:48.683637  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 20/120
	I0520 12:51:49.685398  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 21/120
	I0520 12:51:50.687631  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 22/120
	I0520 12:51:51.689144  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 23/120
	I0520 12:51:52.690656  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 24/120
	I0520 12:51:53.692324  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 25/120
	I0520 12:51:54.694015  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 26/120
	I0520 12:51:55.695587  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 27/120
	I0520 12:51:56.697059  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 28/120
	I0520 12:51:57.698619  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 29/120
	I0520 12:51:58.700259  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 30/120
	I0520 12:51:59.701484  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 31/120
	I0520 12:52:00.703112  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 32/120
	I0520 12:52:01.704483  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 33/120
	I0520 12:52:02.705888  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 34/120
	I0520 12:52:03.707835  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 35/120
	I0520 12:52:04.709123  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 36/120
	I0520 12:52:05.710944  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 37/120
	I0520 12:52:06.712241  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 38/120
	I0520 12:52:07.714180  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 39/120
	I0520 12:52:08.716537  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 40/120
	I0520 12:52:09.718351  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 41/120
	I0520 12:52:10.719759  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 42/120
	I0520 12:52:11.721479  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 43/120
	I0520 12:52:12.723569  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 44/120
	I0520 12:52:13.725896  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 45/120
	I0520 12:52:14.727363  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 46/120
	I0520 12:52:15.729419  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 47/120
	I0520 12:52:16.730805  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 48/120
	I0520 12:52:17.732186  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 49/120
	I0520 12:52:18.734405  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 50/120
	I0520 12:52:19.736372  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 51/120
	I0520 12:52:20.737749  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 52/120
	I0520 12:52:21.739351  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 53/120
	I0520 12:52:22.742164  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 54/120
	I0520 12:52:23.743950  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 55/120
	I0520 12:52:24.745253  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 56/120
	I0520 12:52:25.746744  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 57/120
	I0520 12:52:26.747962  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 58/120
	I0520 12:52:27.749509  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 59/120
	I0520 12:52:28.751424  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 60/120
	I0520 12:52:29.752986  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 61/120
	I0520 12:52:30.754498  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 62/120
	I0520 12:52:31.756328  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 63/120
	I0520 12:52:32.757825  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 64/120
	I0520 12:52:33.760135  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 65/120
	I0520 12:52:34.761531  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 66/120
	I0520 12:52:35.764116  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 67/120
	I0520 12:52:36.765875  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 68/120
	I0520 12:52:37.767309  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 69/120
	I0520 12:52:38.769223  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 70/120
	I0520 12:52:39.770883  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 71/120
	I0520 12:52:40.772894  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 72/120
	I0520 12:52:41.774276  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 73/120
	I0520 12:52:42.775776  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 74/120
	I0520 12:52:43.777857  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 75/120
	I0520 12:52:44.779318  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 76/120
	I0520 12:52:45.780738  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 77/120
	I0520 12:52:46.782106  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 78/120
	I0520 12:52:47.783640  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 79/120
	I0520 12:52:48.785375  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 80/120
	I0520 12:52:49.786838  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 81/120
	I0520 12:52:50.788349  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 82/120
	I0520 12:52:51.790812  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 83/120
	I0520 12:52:52.792516  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 84/120
	I0520 12:52:53.794173  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 85/120
	I0520 12:52:54.795661  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 86/120
	I0520 12:52:55.797374  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 87/120
	I0520 12:52:56.799133  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 88/120
	I0520 12:52:57.800495  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 89/120
	I0520 12:52:58.802505  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 90/120
	I0520 12:52:59.803783  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 91/120
	I0520 12:53:00.804943  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 92/120
	I0520 12:53:01.806211  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 93/120
	I0520 12:53:02.807898  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 94/120
	I0520 12:53:03.809764  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 95/120
	I0520 12:53:04.811446  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 96/120
	I0520 12:53:05.812866  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 97/120
	I0520 12:53:06.814288  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 98/120
	I0520 12:53:07.815816  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 99/120
	I0520 12:53:08.817659  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 100/120
	I0520 12:53:09.819028  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 101/120
	I0520 12:53:10.821331  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 102/120
	I0520 12:53:11.822714  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 103/120
	I0520 12:53:12.824234  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 104/120
	I0520 12:53:13.826256  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 105/120
	I0520 12:53:14.827688  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 106/120
	I0520 12:53:15.829481  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 107/120
	I0520 12:53:16.830971  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 108/120
	I0520 12:53:17.832189  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 109/120
	I0520 12:53:18.834360  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 110/120
	I0520 12:53:19.835647  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 111/120
	I0520 12:53:20.838066  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 112/120
	I0520 12:53:21.839387  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 113/120
	I0520 12:53:22.841546  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 114/120
	I0520 12:53:23.843183  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 115/120
	I0520 12:53:24.845525  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 116/120
	I0520 12:53:25.847059  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 117/120
	I0520 12:53:26.849670  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 118/120
	I0520 12:53:27.851088  882976 main.go:141] libmachine: (ha-252263-m04) Waiting for machine to stop 119/120
	I0520 12:53:28.851625  882976 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 12:53:28.851702  882976 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 12:53:28.853455  882976 out.go:177] 
	W0520 12:53:28.854658  882976 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 12:53:28.854671  882976 out.go:239] * 
	* 
	W0520 12:53:28.858961  882976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 12:53:28.860064  882976 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-252263 stop -v=7 --alsologtostderr": exit status 82
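
The GUEST_STOP_TIMEOUT above corresponds to the bounded polling visible in the "Waiting for machine to stop N/120" lines: the stop path re-checks the VM state roughly once per second for 120 attempts and then gives up while the guest is still "Running". A minimal sketch of that pattern in Go follows; waitForStop and the vmState callback are illustrative names, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls vmState once per second and returns an error if the
// machine is still running after 120 attempts, mirroring the log above.
func waitForStop(vmState func() (string, error)) error {
	for i := 0; i < 120; i++ {
		st, err := vmState()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/120\n", i)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Fake state source for demonstration: the guest never stops, so this
	// exits with the timeout error after about two minutes.
	err := waitForStop(func() (string, error) { return "Running", nil })
	fmt.Println("stop err:", err)
}
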
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr: exit status 3 (18.937160068s)

                                                
                                                
-- stdout --
	ha-252263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-252263-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:53:28.911258  883442 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:53:28.911505  883442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:53:28.911513  883442 out.go:304] Setting ErrFile to fd 2...
	I0520 12:53:28.911517  883442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:53:28.911706  883442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:53:28.911875  883442 out.go:298] Setting JSON to false
	I0520 12:53:28.911902  883442 mustload.go:65] Loading cluster: ha-252263
	I0520 12:53:28.912002  883442 notify.go:220] Checking for updates...
	I0520 12:53:28.912244  883442 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:53:28.912262  883442 status.go:255] checking status of ha-252263 ...
	I0520 12:53:28.912660  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:28.912707  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:28.935654  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 12:53:28.936128  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:28.936806  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:28.936847  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:28.937255  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:28.937469  883442 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:53:28.939088  883442 status.go:330] ha-252263 host status = "Running" (err=<nil>)
	I0520 12:53:28.939112  883442 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:53:28.939386  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:28.939425  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:28.953892  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36943
	I0520 12:53:28.954271  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:28.954680  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:28.954695  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:28.955080  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:28.955293  883442 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:53:28.957787  883442 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:53:28.958260  883442 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:53:28.958289  883442 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:53:28.958413  883442 host.go:66] Checking if "ha-252263" exists ...
	I0520 12:53:28.958672  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:28.958702  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:28.973462  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0520 12:53:28.973875  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:28.974297  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:28.974311  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:28.974669  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:28.974881  883442 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:53:28.975089  883442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:53:28.975131  883442 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:53:28.977969  883442 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:53:28.978429  883442 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:53:28.978458  883442 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:53:28.978616  883442 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:53:28.978865  883442 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:53:28.979011  883442 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:53:28.979152  883442 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:53:29.065015  883442 ssh_runner.go:195] Run: systemctl --version
	I0520 12:53:29.073534  883442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:53:29.094918  883442 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:53:29.094960  883442 api_server.go:166] Checking apiserver status ...
	I0520 12:53:29.095006  883442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:53:29.114835  883442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5165/cgroup
	W0520 12:53:29.124741  883442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:53:29.124801  883442 ssh_runner.go:195] Run: ls
	I0520 12:53:29.129491  883442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:53:29.133840  883442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:53:29.133865  883442 status.go:422] ha-252263 apiserver status = Running (err=<nil>)
	I0520 12:53:29.133876  883442 status.go:257] ha-252263 status: &{Name:ha-252263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:53:29.133895  883442 status.go:255] checking status of ha-252263-m02 ...
	I0520 12:53:29.134178  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:29.134216  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:29.151517  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I0520 12:53:29.151964  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:29.152495  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:29.152515  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:29.152886  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:29.153048  883442 main.go:141] libmachine: (ha-252263-m02) Calling .GetState
	I0520 12:53:29.154427  883442 status.go:330] ha-252263-m02 host status = "Running" (err=<nil>)
	I0520 12:53:29.154443  883442 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:53:29.154840  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:29.154903  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:29.168887  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0520 12:53:29.169301  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:29.169819  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:29.169837  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:29.170150  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:29.170368  883442 main.go:141] libmachine: (ha-252263-m02) Calling .GetIP
	I0520 12:53:29.172886  883442 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:53:29.173309  883442 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:48:40 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:53:29.173335  883442 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:53:29.173509  883442 host.go:66] Checking if "ha-252263-m02" exists ...
	I0520 12:53:29.173782  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:29.173811  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:29.188074  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0520 12:53:29.188424  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:29.188971  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:29.188998  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:29.189297  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:29.189486  883442 main.go:141] libmachine: (ha-252263-m02) Calling .DriverName
	I0520 12:53:29.189701  883442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:53:29.189719  883442 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHHostname
	I0520 12:53:29.192342  883442 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:53:29.192741  883442 main.go:141] libmachine: (ha-252263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:6b", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:48:40 +0000 UTC Type:0 Mac:52:54:00:f8:3d:6b Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-252263-m02 Clientid:01:52:54:00:f8:3d:6b}
	I0520 12:53:29.192761  883442 main.go:141] libmachine: (ha-252263-m02) DBG | domain ha-252263-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:f8:3d:6b in network mk-ha-252263
	I0520 12:53:29.193048  883442 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHPort
	I0520 12:53:29.193257  883442 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHKeyPath
	I0520 12:53:29.193443  883442 main.go:141] libmachine: (ha-252263-m02) Calling .GetSSHUsername
	I0520 12:53:29.193578  883442 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m02/id_rsa Username:docker}
	I0520 12:53:29.271977  883442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:53:29.289436  883442 kubeconfig.go:125] found "ha-252263" server: "https://192.168.39.254:8443"
	I0520 12:53:29.289467  883442 api_server.go:166] Checking apiserver status ...
	I0520 12:53:29.289509  883442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:53:29.304263  883442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup
	W0520 12:53:29.313528  883442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 12:53:29.313576  883442 ssh_runner.go:195] Run: ls
	I0520 12:53:29.318301  883442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 12:53:29.322610  883442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 12:53:29.322629  883442 status.go:422] ha-252263-m02 apiserver status = Running (err=<nil>)
	I0520 12:53:29.322638  883442 status.go:257] ha-252263-m02 status: &{Name:ha-252263-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 12:53:29.322653  883442 status.go:255] checking status of ha-252263-m04 ...
	I0520 12:53:29.323003  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:29.323046  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:29.338002  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0520 12:53:29.338463  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:29.339016  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:29.339042  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:29.339388  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:29.339586  883442 main.go:141] libmachine: (ha-252263-m04) Calling .GetState
	I0520 12:53:29.341416  883442 status.go:330] ha-252263-m04 host status = "Running" (err=<nil>)
	I0520 12:53:29.341436  883442 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:53:29.341711  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:29.341743  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:29.356043  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I0520 12:53:29.356537  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:29.357056  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:29.357076  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:29.357397  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:29.357593  883442 main.go:141] libmachine: (ha-252263-m04) Calling .GetIP
	I0520 12:53:29.360228  883442 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:53:29.360658  883442 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:50:56 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:53:29.360688  883442 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:53:29.360833  883442 host.go:66] Checking if "ha-252263-m04" exists ...
	I0520 12:53:29.361165  883442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:53:29.361202  883442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:53:29.375015  883442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0520 12:53:29.375381  883442 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:53:29.375851  883442 main.go:141] libmachine: Using API Version  1
	I0520 12:53:29.375875  883442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:53:29.376182  883442 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:53:29.376376  883442 main.go:141] libmachine: (ha-252263-m04) Calling .DriverName
	I0520 12:53:29.376584  883442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 12:53:29.376610  883442 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHHostname
	I0520 12:53:29.379256  883442 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:53:29.379753  883442 main.go:141] libmachine: (ha-252263-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:b0:71", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:50:56 +0000 UTC Type:0 Mac:52:54:00:4c:b0:71 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-252263-m04 Clientid:01:52:54:00:4c:b0:71}
	I0520 12:53:29.379788  883442 main.go:141] libmachine: (ha-252263-m04) DBG | domain ha-252263-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:4c:b0:71 in network mk-ha-252263
	I0520 12:53:29.379917  883442 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHPort
	I0520 12:53:29.380086  883442 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHKeyPath
	I0520 12:53:29.380243  883442 main.go:141] libmachine: (ha-252263-m04) Calling .GetSSHUsername
	I0520 12:53:29.380381  883442 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263-m04/id_rsa Username:docker}
	W0520 12:53:47.799071  883442 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.41:22: connect: no route to host
	W0520 12:53:47.799196  883442 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	E0520 12:53:47.799214  883442 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	I0520 12:53:47.799222  883442 status.go:257] ha-252263-m04 status: &{Name:ha-252263-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0520 12:53:47.799244  883442 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr" : exit status 3
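
Regarding the status exit code 3: ha-252263-m04 is reported as "host: Error / kubelet: Nonexistent" because the SSH dial to 192.168.39.41:22 failed with "no route to host" (see the stderr above). A simplified sketch of that kind of reachability probe in Go follows; probeNode and the timeout value are illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeNode reports a node as Error when a TCP connection to its SSH port
// cannot be established within the timeout.
func probeNode(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return fmt.Sprintf("host: Error (%v)", err)
	}
	conn.Close()
	return "host: Running"
}

func main() {
	// Address taken from the log above; port 22 is the node's SSH port.
	fmt.Println(probeNode("192.168.39.41:22"))
}
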
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-252263 -n ha-252263
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-252263 logs -n 25: (1.673285921s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m04 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp testdata/cp-test.txt                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263:/home/docker/cp-test_ha-252263-m04_ha-252263.txt                      |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263 sudo cat                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263.txt                                |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m02:/home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m02 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m03:/home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n                                                                | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | ha-252263-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-252263 ssh -n ha-252263-m03 sudo cat                                         | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC | 20 May 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-252263 node stop m02 -v=7                                                    | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-252263 node start m02 -v=7                                                   | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-252263 -v=7                                                          | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-252263 -v=7                                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-252263 --wait=true -v=7                                                   | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:46 UTC | 20 May 24 12:51 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-252263                                                               | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:51 UTC |                     |
	| node    | ha-252263 node delete m03 -v=7                                                  | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:51 UTC | 20 May 24 12:51 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-252263 stop -v=7                                                             | ha-252263 | jenkins | v1.33.1 | 20 May 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:46:50
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:46:50.893960  881185 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:46:50.894231  881185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:46:50.894241  881185 out.go:304] Setting ErrFile to fd 2...
	I0520 12:46:50.894245  881185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:46:50.894436  881185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:46:50.894994  881185 out.go:298] Setting JSON to false
	I0520 12:46:50.895949  881185 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8959,"bootTime":1716200252,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:46:50.896013  881185 start.go:139] virtualization: kvm guest
	I0520 12:46:50.898520  881185 out.go:177] * [ha-252263] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:46:50.900363  881185 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 12:46:50.900394  881185 notify.go:220] Checking for updates...
	I0520 12:46:50.902005  881185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:46:50.903776  881185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:46:50.905142  881185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:46:50.906396  881185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:46:50.907765  881185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:46:50.909421  881185 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:46:50.909516  881185 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:46:50.910005  881185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:46:50.910090  881185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:46:50.925867  881185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0520 12:46:50.926373  881185 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:46:50.927010  881185 main.go:141] libmachine: Using API Version  1
	I0520 12:46:50.927034  881185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:46:50.927393  881185 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:46:50.927589  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:46:50.962647  881185 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 12:46:50.963848  881185 start.go:297] selected driver: kvm2
	I0520 12:46:50.963875  881185 start.go:901] validating driver "kvm2" against &{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:46:50.964080  881185 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:46:50.964427  881185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:46:50.964507  881185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:46:50.979394  881185 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:46:50.980093  881185 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:46:50.980124  881185 cni.go:84] Creating CNI manager for ""
	I0520 12:46:50.980132  881185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 12:46:50.980201  881185 start.go:340] cluster config:
	{Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:46:50.980381  881185 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:46:50.982109  881185 out.go:177] * Starting "ha-252263" primary control-plane node in "ha-252263" cluster
	I0520 12:46:50.983459  881185 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:46:50.983495  881185 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:46:50.983504  881185 cache.go:56] Caching tarball of preloaded images
	I0520 12:46:50.983587  881185 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:46:50.983600  881185 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:46:50.983812  881185 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/config.json ...
	I0520 12:46:50.984087  881185 start.go:360] acquireMachinesLock for ha-252263: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:46:50.984136  881185 start.go:364] duration metric: took 26.306µs to acquireMachinesLock for "ha-252263"
	I0520 12:46:50.984152  881185 start.go:96] Skipping create...Using existing machine configuration
	I0520 12:46:50.984165  881185 fix.go:54] fixHost starting: 
	I0520 12:46:50.984443  881185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:46:50.984476  881185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:46:50.998399  881185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I0520 12:46:50.998774  881185 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:46:50.999309  881185 main.go:141] libmachine: Using API Version  1
	I0520 12:46:50.999328  881185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:46:50.999634  881185 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:46:50.999802  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:46:50.999937  881185 main.go:141] libmachine: (ha-252263) Calling .GetState
	I0520 12:46:51.001405  881185 fix.go:112] recreateIfNeeded on ha-252263: state=Running err=<nil>
	W0520 12:46:51.001427  881185 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 12:46:51.003696  881185 out.go:177] * Updating the running kvm2 "ha-252263" VM ...
	I0520 12:46:51.005091  881185 machine.go:94] provisionDockerMachine start ...
	I0520 12:46:51.005110  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:46:51.005344  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.007809  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.008385  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.008412  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.008564  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.008724  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.008820  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.008967  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.009106  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.009290  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.009301  881185 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 12:46:51.123685  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263
	
	I0520 12:46:51.123716  881185 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:46:51.123935  881185 buildroot.go:166] provisioning hostname "ha-252263"
	I0520 12:46:51.123964  881185 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:46:51.124203  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.127095  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.127471  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.127498  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.127673  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.127840  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.128016  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.128173  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.128392  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.128568  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.128586  881185 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-252263 && echo "ha-252263" | sudo tee /etc/hostname
	I0520 12:46:51.250579  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-252263
	
	I0520 12:46:51.250603  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.253500  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.253972  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.254009  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.254185  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.254363  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.254577  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.254710  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.254909  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.255132  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.255154  881185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-252263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-252263/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-252263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:46:51.363737  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:46:51.363767  881185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 12:46:51.363806  881185 buildroot.go:174] setting up certificates
	I0520 12:46:51.363816  881185 provision.go:84] configureAuth start
	I0520 12:46:51.363865  881185 main.go:141] libmachine: (ha-252263) Calling .GetMachineName
	I0520 12:46:51.364162  881185 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:46:51.366963  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.367301  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.367327  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.367436  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.369766  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.370131  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.370156  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.370279  881185 provision.go:143] copyHostCerts
	I0520 12:46:51.370305  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:46:51.370348  881185 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 12:46:51.370368  881185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 12:46:51.370435  881185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 12:46:51.370565  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:46:51.370587  881185 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 12:46:51.370591  881185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 12:46:51.370620  881185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 12:46:51.370677  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:46:51.370693  881185 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 12:46:51.370699  881185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 12:46:51.370720  881185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 12:46:51.370787  881185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.ha-252263 san=[127.0.0.1 192.168.39.182 ha-252263 localhost minikube]
	I0520 12:46:51.497594  881185 provision.go:177] copyRemoteCerts
	I0520 12:46:51.497663  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:46:51.497691  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.500317  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.500656  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.500681  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.500893  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.501100  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.501278  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.501403  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:46:51.586467  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 12:46:51.586538  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 12:46:51.616566  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 12:46:51.616623  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:46:51.648013  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 12:46:51.648074  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 12:46:51.676986  881185 provision.go:87] duration metric: took 313.153584ms to configureAuth
	I0520 12:46:51.677008  881185 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:46:51.677248  881185 config.go:182] Loaded profile config "ha-252263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:46:51.677346  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:46:51.680031  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.680384  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:46:51.680407  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:46:51.680580  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:46:51.680785  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.680947  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:46:51.681105  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:46:51.681282  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:46:51.681494  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:46:51.681520  881185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:48:22.583665  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:48:22.583709  881185 machine.go:97] duration metric: took 1m31.578602067s to provisionDockerMachine
	I0520 12:48:22.583731  881185 start.go:293] postStartSetup for "ha-252263" (driver="kvm2")
	I0520 12:48:22.583745  881185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:48:22.583778  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.584140  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:48:22.584173  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.587762  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.588226  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.588253  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.588442  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.588653  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.588833  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.588969  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:48:22.674629  881185 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:48:22.679009  881185 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:48:22.679041  881185 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 12:48:22.679135  881185 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 12:48:22.679225  881185 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 12:48:22.679249  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 12:48:22.679333  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:48:22.689115  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:48:22.712800  881185 start.go:296] duration metric: took 129.05594ms for postStartSetup
	I0520 12:48:22.712847  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.713161  881185 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 12:48:22.713197  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.715956  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.716318  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.716342  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.716553  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.716767  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.716953  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.717124  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	W0520 12:48:22.801470  881185 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 12:48:22.801497  881185 fix.go:56] duration metric: took 1m31.81733513s for fixHost
	I0520 12:48:22.801521  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.804311  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.804783  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.804813  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.804956  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.805132  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.805268  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.805473  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.805614  881185 main.go:141] libmachine: Using SSH client type: native
	I0520 12:48:22.805775  881185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0520 12:48:22.805785  881185 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:48:22.911486  881185 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716209302.877814695
	
	I0520 12:48:22.911505  881185 fix.go:216] guest clock: 1716209302.877814695
	I0520 12:48:22.911512  881185 fix.go:229] Guest: 2024-05-20 12:48:22.877814695 +0000 UTC Remote: 2024-05-20 12:48:22.801504925 +0000 UTC m=+91.944301839 (delta=76.30977ms)
	I0520 12:48:22.911558  881185 fix.go:200] guest clock delta is within tolerance: 76.30977ms
	I0520 12:48:22.911564  881185 start.go:83] releasing machines lock for "ha-252263", held for 1m31.92742038s
	I0520 12:48:22.911584  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.911886  881185 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:48:22.914654  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.915033  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.915075  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.915196  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.915645  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.915810  881185 main.go:141] libmachine: (ha-252263) Calling .DriverName
	I0520 12:48:22.915893  881185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:48:22.915963  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.915977  881185 ssh_runner.go:195] Run: cat /version.json
	I0520 12:48:22.916002  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHHostname
	I0520 12:48:22.918410  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.918687  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.918790  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.918814  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.918958  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.919118  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:22.919139  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.919147  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:22.919333  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHPort
	I0520 12:48:22.919357  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.919543  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	I0520 12:48:22.919559  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHKeyPath
	I0520 12:48:22.919689  881185 main.go:141] libmachine: (ha-252263) Calling .GetSSHUsername
	I0520 12:48:22.919825  881185 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/ha-252263/id_rsa Username:docker}
	W0520 12:48:23.025947  881185 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:48:23.026050  881185 ssh_runner.go:195] Run: systemctl --version
	I0520 12:48:23.032371  881185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:48:23.208602  881185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:48:23.217077  881185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:48:23.217137  881185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:48:23.226786  881185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 12:48:23.226803  881185 start.go:494] detecting cgroup driver to use...
	I0520 12:48:23.226880  881185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:48:23.246025  881185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:48:23.259750  881185 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:48:23.259796  881185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:48:23.274444  881185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:48:23.288860  881185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:48:23.453741  881185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:48:23.598333  881185 docker.go:233] disabling docker service ...
	I0520 12:48:23.598416  881185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:48:23.613865  881185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:48:23.627210  881185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:48:23.770585  881185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:48:23.919477  881185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:48:23.933943  881185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:48:23.953873  881185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:48:23.953937  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.964288  881185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:48:23.964356  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.974488  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.984368  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:23.994632  881185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:48:24.005240  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:24.015670  881185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:24.026882  881185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:48:24.037218  881185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:48:24.046540  881185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:48:24.055797  881185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:48:24.199072  881185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:48:28.943836  881185 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.744727686s)
	I0520 12:48:28.943867  881185 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:48:28.943919  881185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:48:28.949096  881185 start.go:562] Will wait 60s for crictl version
	I0520 12:48:28.949159  881185 ssh_runner.go:195] Run: which crictl
	I0520 12:48:28.953257  881185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:48:28.994397  881185 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:48:28.994486  881185 ssh_runner.go:195] Run: crio --version
	I0520 12:48:29.024547  881185 ssh_runner.go:195] Run: crio --version
	I0520 12:48:29.054964  881185 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 12:48:29.056449  881185 main.go:141] libmachine: (ha-252263) Calling .GetIP
	I0520 12:48:29.059069  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:29.059513  881185 main.go:141] libmachine: (ha-252263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6e:b0", ip: ""} in network mk-ha-252263: {Iface:virbr1 ExpiryTime:2024-05-20 13:37:09 +0000 UTC Type:0 Mac:52:54:00:44:6e:b0 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-252263 Clientid:01:52:54:00:44:6e:b0}
	I0520 12:48:29.059539  881185 main.go:141] libmachine: (ha-252263) DBG | domain ha-252263 has defined IP address 192.168.39.182 and MAC address 52:54:00:44:6e:b0 in network mk-ha-252263
	I0520 12:48:29.059715  881185 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 12:48:29.064789  881185 kubeadm.go:877] updating cluster {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 12:48:29.064930  881185 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:48:29.064975  881185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:48:29.106059  881185 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:48:29.106086  881185 crio.go:433] Images already preloaded, skipping extraction
	I0520 12:48:29.106135  881185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 12:48:29.138232  881185 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 12:48:29.138259  881185 cache_images.go:84] Images are preloaded, skipping loading
	I0520 12:48:29.138271  881185 kubeadm.go:928] updating node { 192.168.39.182 8443 v1.30.1 crio true true} ...
	I0520 12:48:29.138437  881185 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-252263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 12:48:29.138523  881185 ssh_runner.go:195] Run: crio config
	I0520 12:48:29.192092  881185 cni.go:84] Creating CNI manager for ""
	I0520 12:48:29.192112  881185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 12:48:29.192134  881185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 12:48:29.192157  881185 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-252263 NodeName:ha-252263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 12:48:29.192379  881185 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-252263"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
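The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:181 (pod CIDR, service CIDR, control-plane endpoint, API server port, CRI socket). A small sketch of how such a ClusterConfiguration fragment could be filled in; the field names and template here are illustrative, not minikube's actual code:

    package main

    import (
    	"os"
    	"text/template"
    )

    // clusterConfigTmpl reproduces a fragment of the ClusterConfiguration shown
    // in the log; only the endpoint and networking fields are parameterised.
    const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	// Values copied from the kubeadm options in the log; the struct itself is illustrative.
    	opts := struct {
    		ControlPlaneAddress, KubernetesVersion, DNSDomain, PodSubnet, ServiceCIDR string
    		APIServerPort                                                             int
    	}{"control-plane.minikube.internal", "v1.30.1", "cluster.local", "10.244.0.0/16", "10.96.0.0/12", 8443}

    	tmpl := template.Must(template.New("cc").Parse(clusterConfigTmpl))
    	if err := tmpl.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }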
	I0520 12:48:29.192407  881185 kube-vip.go:115] generating kube-vip config ...
	I0520 12:48:29.192457  881185 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 12:48:29.203947  881185 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 12:48:29.204069  881185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
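The kube-vip.go:167 message indicates that control-plane load-balancing appears to be enabled only after the ip_vs modules are loaded successfully by the modprobe run at 12:48:29.192457; the lb_enable/lb_port env vars then end up in the static pod manifest above. A hedged sketch of that decision, with made-up helper names:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable reports whether the IPVS kernel modules used by kube-vip's
    // load balancer can be loaded on this host (same modprobe call as the log).
    func ipvsAvailable() bool {
    	cmd := exec.Command("sudo", "sh", "-c",
    		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
    	return cmd.Run() == nil
    }

    func main() {
    	// Base env for the kube-vip static pod, as seen in the manifest above.
    	env := map[string]string{
    		"cp_enable": "true",
    		"address":   "192.168.39.254",
    		"port":      "8443",
    	}
    	if ipvsAvailable() {
    		// Only add the lb_* settings when IPVS is usable.
    		env["lb_enable"] = "true"
    		env["lb_port"] = "8443"
    	}
    	fmt.Println(env)
    }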
	I0520 12:48:29.204130  881185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 12:48:29.213329  881185 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 12:48:29.213394  881185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 12:48:29.222198  881185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 12:48:29.238445  881185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 12:48:29.254900  881185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 12:48:29.271051  881185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 12:48:29.287060  881185 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 12:48:29.292041  881185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:48:29.440333  881185 ssh_runner.go:195] Run: sudo systemctl start kubelet
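The grep at 12:48:29.287060 checks whether the HA VIP already resolves control-plane.minikube.internal before the kubelet is reloaded and started; when the entry is missing it would be appended to /etc/hosts. A rough sketch of that guard, assuming a plain append, which is not necessarily how minikube edits the file:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry appends "ip name" to the hosts file unless a matching
    // line already exists. Paths and behaviour are illustrative only.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[0] == ip && fields[1] == name {
    			return nil // entry already present, nothing to do
    		}
    	}
    	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, name)
    	return err
    }

    func main() {
    	// Values from the log; writing /etc/hosts normally requires root.
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }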
	I0520 12:48:29.456304  881185 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263 for IP: 192.168.39.182
	I0520 12:48:29.456330  881185 certs.go:194] generating shared ca certs ...
	I0520 12:48:29.456347  881185 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:48:29.456516  881185 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 12:48:29.456558  881185 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 12:48:29.456567  881185 certs.go:256] generating profile certs ...
	I0520 12:48:29.456645  881185 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/client.key
	I0520 12:48:29.456671  881185 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9
	I0520 12:48:29.456686  881185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182 192.168.39.22 192.168.39.60 192.168.39.254]
	I0520 12:48:29.578478  881185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9 ...
	I0520 12:48:29.578511  881185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9: {Name:mk4a184bdb7fba968ea974df92ad467368b653b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:48:29.578706  881185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9 ...
	I0520 12:48:29.578725  881185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9: {Name:mkec6a6258c44021fe39dc047dee8a55418c7ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:48:29.578822  881185 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt.65e505f9 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt
	I0520 12:48:29.579023  881185 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key.65e505f9 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key
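The apiserver certificate generated at 12:48:29.456686 carries every control-plane node IP plus the HA VIP (192.168.39.254) and the in-cluster service IPs as SANs, so one certificate stays valid no matter which control-plane node answers on 8443. A compact, self-signed illustration of issuing a cert with such IP SANs; minikube actually signs with the minikubeCA key, which is skipped here for brevity:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs copied from the log line "Generating cert ... with IP's:".
    	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"192.168.39.182", "192.168.39.22", "192.168.39.60", "192.168.39.254"}
    	ips := make([]net.IP, 0, len(sans))
    	for _, s := range sans {
    		ips = append(ips, net.ParseIP(s))
    	}

    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips, // this is what makes the cert valid for every listed IP
    	}
    	// Self-signed for brevity; minikube signs with its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }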
	I0520 12:48:29.579171  881185 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key
	I0520 12:48:29.579188  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 12:48:29.579199  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 12:48:29.579209  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 12:48:29.579219  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 12:48:29.579232  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 12:48:29.579242  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 12:48:29.579254  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 12:48:29.579267  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 12:48:29.579309  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 12:48:29.579347  881185 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 12:48:29.579356  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 12:48:29.579375  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 12:48:29.579395  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 12:48:29.579414  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 12:48:29.579449  881185 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 12:48:29.579475  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.579489  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.579503  881185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.580076  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 12:48:29.605586  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 12:48:29.629616  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 12:48:29.653697  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 12:48:29.676393  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 12:48:29.699916  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 12:48:29.723056  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 12:48:29.745949  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/ha-252263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 12:48:29.772023  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 12:48:29.794457  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 12:48:29.817502  881185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 12:48:29.841298  881185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
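Each ssh_runner scp line above copies one local asset to its in-VM destination; conceptually it is a (source, destination) table walked in order. A trivial sketch of that loop with a local stand-in copy function instead of minikube's ssh_runner; the paths shown are just a sample of the pairs from the log:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // copyFile is a stand-in for minikube's ssh_runner scp; here it copies locally.
    func copyFile(src, dst string) error {
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
    		return err
    	}
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	// A few of the (source -> destination) pairs from the log, for illustration.
    	assets := map[string]string{
    		"ca.crt":        "/var/lib/minikube/certs/ca.crt",
    		"apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
    		"apiserver.key": "/var/lib/minikube/certs/apiserver.key",
    	}
    	for src, dst := range assets {
    		if err := copyFile(src, dst); err != nil {
    			fmt.Fprintf(os.Stderr, "copy %s -> %s: %v\n", src, dst, err)
    		}
    	}
    }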
	I0520 12:48:29.857576  881185 ssh_runner.go:195] Run: openssl version
	I0520 12:48:29.863454  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 12:48:29.874264  881185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.878949  881185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.878988  881185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 12:48:29.884807  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 12:48:29.893765  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 12:48:29.904091  881185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.908416  881185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.908455  881185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 12:48:29.914048  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 12:48:29.923246  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 12:48:29.933467  881185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.937820  881185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.937862  881185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 12:48:29.943414  881185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
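The symlink names 51391683.0, 3ec20f2e.0 and b5213941.0 come from `openssl x509 -hash`, which prints the certificate's subject-name hash; OpenSSL resolves trusted CAs in /etc/ssl/certs by exactly that `<hash>.0` filename, which is why each CA file is linked under it. A small sketch reproducing the hash-and-symlink step by shelling out to openssl (assumes openssl is on PATH and the process can write /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // subjectHash returns the value printed by `openssl x509 -hash -noout -in cert`.
    func subjectHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	hash, err := subjectHash(cert)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// OpenSSL expects /etc/ssl/certs/<subject-hash>.0 to point at the CA cert.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println(link, "->", cert)
    }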
	I0520 12:48:29.952311  881185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 12:48:29.957009  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 12:48:29.962677  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 12:48:29.968138  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 12:48:29.973525  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 12:48:29.978870  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 12:48:29.984636  881185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
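`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), so the run of checks above verifies that every existing control-plane certificate is still usable before it is reused. A minimal wrapper around that same invocation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithin reports whether the certificate at path expires within the
    // given number of seconds, using the same openssl call as the log.
    func expiresWithin(path string, seconds int) bool {
    	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
    		"-checkend", fmt.Sprint(seconds))
    	// openssl exits 0 if the cert is still valid past the window, 1 otherwise.
    	return cmd.Run() != nil
    }

    func main() {
    	for _, cert := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		if expiresWithin(cert, 86400) {
    			fmt.Println(cert, "needs regeneration")
    		}
    	}
    }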
	I0520 12:48:29.990202  881185 kubeadm.go:391] StartCluster: {Name:ha-252263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-252263 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:48:29.990304  881185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 12:48:29.990340  881185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 12:48:30.029695  881185 cri.go:89] found id: "d7a6fad8a75b9788b12befa54d77c18ee2510c7bad67a78381dfb14bfb61654c"
	I0520 12:48:30.029724  881185 cri.go:89] found id: "a7b500ca4a5ff4b26d4c450d219ed21171a47cd6937d3a9c5cee7c7c90214ff2"
	I0520 12:48:30.029730  881185 cri.go:89] found id: "8d83655ac29f13b80a76832615408f83141c2476915f4ce562a635f00c84b477"
	I0520 12:48:30.029737  881185 cri.go:89] found id: "49c278c418300797d23288d9dcf4ec027b7fa754b2869d4a360d8e196c2fcc5e"
	I0520 12:48:30.029741  881185 cri.go:89] found id: "0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a"
	I0520 12:48:30.029746  881185 cri.go:89] found id: "81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7"
	I0520 12:48:30.029750  881185 cri.go:89] found id: "f4931bfff375c6d9f4dab0d3c616c5ba37eb42803822e6808a846d23c0eb3353"
	I0520 12:48:30.029753  881185 cri.go:89] found id: "8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e"
	I0520 12:48:30.029757  881185 cri.go:89] found id: "8e7cb9bc2927761dad6889642239677c41fd361371fb7396c4b8590ae45ddad9"
	I0520 12:48:30.029778  881185 cri.go:89] found id: "78352b69293ae63c1b3985c05008d097d4a52958942e15130e0e6d5b8357e4bf"
	I0520 12:48:30.029788  881185 cri.go:89] found id: "8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290"
	I0520 12:48:30.029792  881185 cri.go:89] found id: "38216273b9bc6519421464997419c27626a1b14f4ce50b754efdadebb42e0257"
	I0520 12:48:30.029797  881185 cri.go:89] found id: "57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b"
	I0520 12:48:30.029804  881185 cri.go:89] found id: ""
	I0520 12:48:30.029858  881185 ssh_runner.go:195] Run: sudo runc list -f json
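The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call returns one container ID per line, which is what the cri.go:89 "found id" entries echo back above (the trailing empty id marks the end of the list). A hedged sketch of collecting those IDs on the node, assuming crictl and sudo are available:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers returns the IDs of all kube-system containers,
    // mirroring the crictl invocation in the log.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }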
	
	
	==> CRI-O <==
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.443814191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209628443794608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34623776-ec98-4701-aadd-953b65e0c04b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.444387667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1a24c12-cad0-450e-8bd3-80ffc811750c name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.444454910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1a24c12-cad0-450e-8bd3-80ffc811750c name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.445108904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1a24c12-cad0-450e-8bd3-80ffc811750c name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.494534459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6300201b-5a21-4f5d-8e61-edd7cfeca0dc name=/runtime.v1.RuntimeService/Version
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.494606527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6300201b-5a21-4f5d-8e61-edd7cfeca0dc name=/runtime.v1.RuntimeService/Version
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.495795068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95f5f19f-b764-4682-8816-bf3ac4ac4bc3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.496317945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209628496292930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95f5f19f-b764-4682-8816-bf3ac4ac4bc3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.496774661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4296d59a-9e30-4a5f-b86c-c96166b22087 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.496856738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4296d59a-9e30-4a5f-b86c-c96166b22087 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.497571372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4296d59a-9e30-4a5f-b86c-c96166b22087 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.545588351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e33f80d-7bfd-45e5-be4e-1c963d3f092c name=/runtime.v1.RuntimeService/Version
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.545680061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e33f80d-7bfd-45e5-be4e-1c963d3f092c name=/runtime.v1.RuntimeService/Version
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.546668823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c00eb79-29cc-4702-91a6-c94bcdca173b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.547171080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209628547147946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c00eb79-29cc-4702-91a6-c94bcdca173b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.547753275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24a3fac4-78fb-41a8-931f-625d0d9f7284 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.547877077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24a3fac4-78fb-41a8-931f-625d0d9f7284 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.549022134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24a3fac4-78fb-41a8-931f-625d0d9f7284 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.596070829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ae5bf6d-fcd8-451a-8918-a40f4ca92cb5 name=/runtime.v1.RuntimeService/Version
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.596167203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ae5bf6d-fcd8-451a-8918-a40f4ca92cb5 name=/runtime.v1.RuntimeService/Version
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.597094935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae1b30c0-bd91-4ea2-a8c0-c56dfc9f50a3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.597509594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716209628597484405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae1b30c0-bd91-4ea2-a8c0-c56dfc9f50a3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.598029671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64238444-e712-4d26-a9dd-515d6601869d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.598098935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64238444-e712-4d26-a9dd-515d6601869d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:53:48 ha-252263 crio[3781]: time="2024-05-20 12:53:48.598492339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55688fae5ad571a2951d009a710fdd76606eed7d23f1a4d34088028e5cdfa8a4,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716209397950618168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716209390940067511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kubernetes.container.hash: 195c0558,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716209359940574305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Annotations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716209357950417352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf,PodSandboxId:b40196f493a75600f27a83a21e2565a4e846746e7e086d3b00d30792e854b853,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716209355943478973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db18dbf-710f-4c10-84bb-c5120c865740,},Annotations:map[string]string{io.kubernetes.container.hash: 7b8772d4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129f71aae7a20f88722fa5b23d17d7c8c5e42a6c5f7285856acf009dcaed3577,PodSandboxId:8d0e14d1097073ac4c8476fb550265be7204d0ab73de85c6deeb801987d6fd5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716209349209494114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kubernetes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76eb61ab14b8302988f449747a7e1c3b0dd0b1e09b0b53dbb5a79a84aa238731,PodSandboxId:af352eb3fc18694d8788b404aec100927cb2c2417102ba657d37e1daa55a8131,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716209330142138803,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab810d379e9444cc018c95e07377fd96,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127,PodSandboxId:79174bbdb164d4e6340669c3d635c1cbe76bf42edbf2def7f7b65af81df9624f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716209316309194383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a2e85d5
f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e,PodSandboxId:0922184556c5d964f56750b28316c6fc12f267e5443718dfc74f3c4655e35d70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316010461868,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f,PodSandboxId:acb60c855b09253f262a98f4d57253f0ad7e4f10d424d906ce5b953c06e287e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716209316001562993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792,PodSandboxId:cccebdc1b35d50a408cc3a5ecf48926eb649235a4c5c51170f935d3248b976fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716209315833466317,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2,PodSandboxId:9b343fc81ed0f96547baa065b67f0d8b1fd51846cdb03629b530825558cfd5ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716209315774510078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9
f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2,PodSandboxId:8b14fedca25acdfcff55a4004456962a3e992022a847ce4920ae42683f5a2291,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716209315704203583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a55b737ed9f789
145db5fccf1c1af9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1,PodSandboxId:cb0fd61b6b9479d267b460852bab324d9f5d3e4b1657a718d99b293e3a710144,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716209315717630530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a203f8e0978c311771fe427cfc08bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: d0f936cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127,PodSandboxId:402b31683e2d31383f565e1aceb4d920da3dab55d53ffa6b57c304fd3ad56d63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716209310598131705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8vkjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b222e7ad-6005-42bf-867f-40b94d584782,},Annotations:map[string]string{io.kuber
netes.container.hash: 195c0558,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb77a13cb639909f22fd17991102a85f29a652f67ff36310aeb0a4fb7b1bc46,PodSandboxId:e3f7317af104fff75258e47993629ace39252506c9b07d77d3ee0de0d4f8e211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716208821244678391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vdgxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57097c7d-bdee-48f4-8736-264f6cfaee92,},Annotations:map[string]string{io.kuberne
tes.container.hash: f46ec96d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a,PodSandboxId:8217c5dc10b50672925df0bef2f089790b80a93829f275e8056229c3295ab2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674333448654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5fa83f0-abaa-4c78-8d08-124503934fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 14ecf081,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7,PodSandboxId:43b0b303d8ecf72b309d0be4c4fd2234ae68ec4a6f62ad836ef54bb7d26c00f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716208674327197782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-96h5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2d9323-6e0b-4a50-b834-2fd9b3c74bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 4403ef97,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e,PodSandboxId:85f3c6afc77a51ec807d74d350840358503ffd0e2b7a433379776ca53aaaf3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716208672039078656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5zvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9f5f1f-60ac-4567-8d5c-b2de0404623f,},Annotations:map[string]string{io.kubernetes.container.hash: f24d6035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290,PodSandboxId:e9f3670ad0515b9eb115555943d4beb0426efc88f425cd2f46d5a5b3d85aad51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716208651871813725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140ef0230d166f054d4e1035bde09336,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b,PodSandboxId:9dcb3183f7b71ce5a97acccd3fc3b88f7a117ba05c51332993aa0d81bc9960f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716208651761199232,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-252263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c625499e3affdd6ad46b9f9df2e2d950,},Annotations:map[string]string{io.kubernetes.container.hash: 3af22afc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64238444-e712-4d26-a9dd-515d6601869d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55688fae5ad57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   b40196f493a75       storage-provisioner
	0c1f331e32feb       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               3                   402b31683e2d3       kindnet-8vkjc
	1779ba907d699       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   cb0fd61b6b947       kube-apiserver-ha-252263
	bbdb833df0479       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   8b14fedca25ac       kube-controller-manager-ha-252263
	ea90ef3e02cff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   b40196f493a75       storage-provisioner
	129f71aae7a20       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   8d0e14d109707       busybox-fc5497c4f-vdgxd
	76eb61ab14b83       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   af352eb3fc186       kube-vip-ha-252263
	ece57eb718aac       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                1                   79174bbdb164d       kube-proxy-z5zvt
	3a2e85d5f6d40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0922184556c5d       coredns-7db6d8ff4d-96h5w
	b1bfad9b3a0b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   acb60c855b092       coredns-7db6d8ff4d-c2vkj
	a527eb856411d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago       Running             kube-scheduler            1                   cccebdc1b35d5       kube-scheduler-ha-252263
	daebcc18593c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   9b343fc81ed0f       etcd-ha-252263
	3994d5ac68b46       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      5 minutes ago       Exited              kube-apiserver            2                   cb0fd61b6b947       kube-apiserver-ha-252263
	d506292b9f275       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago       Exited              kube-controller-manager   1                   8b14fedca25ac       kube-controller-manager-ha-252263
	f0fa157b9750d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   402b31683e2d3       kindnet-8vkjc
	7fb77a13cb639       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   e3f7317af104f       busybox-fc5497c4f-vdgxd
	0aaaa2c2d0a2a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   8217c5dc10b50       coredns-7db6d8ff4d-c2vkj
	81df7a9501142       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   43b0b303d8ecf       coredns-7db6d8ff4d-96h5w
	8481a0a858b8f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      15 minutes ago      Exited              kube-proxy                0                   85f3c6afc77a5       kube-proxy-z5zvt
	8516a1fdea0a5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      16 minutes ago      Exited              kube-scheduler            0                   e9f3670ad0515       kube-scheduler-ha-252263
	57b99e90b3f2c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   9dcb3183f7b71       etcd-ha-252263
	
	
	==> coredns [0aaaa2c2d0a2a27237b92b04453cf84d8a66369986c072798db4f5b0ce1bfc6a] <==
	[INFO] 10.244.2.2:33816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646431s
	[INFO] 10.244.2.2:35739 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000262525s
	[INFO] 10.244.2.2:38598 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158046s
	[INFO] 10.244.2.2:58591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129009s
	[INFO] 10.244.2.2:42154 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077099s
	[INFO] 10.244.1.2:55966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236408s
	[INFO] 10.244.1.2:38116 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165417s
	[INFO] 10.244.1.2:42765 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013421s
	[INFO] 10.244.0.4:43917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087757s
	[INFO] 10.244.2.2:39196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131607s
	[INFO] 10.244.2.2:53256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139178s
	[INFO] 10.244.2.2:51674 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089462s
	[INFO] 10.244.2.2:49072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088789s
	[INFO] 10.244.1.2:56181 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013731s
	[INFO] 10.244.1.2:41238 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121064s
	[INFO] 10.244.0.4:51538 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100171s
	[INFO] 10.244.2.2:59762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112653s
	[INFO] 10.244.2.2:48400 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080614s
	[INFO] 10.244.1.2:54360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166063s
	[INFO] 10.244.1.2:51350 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071222s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1844&timeout=6m41s&timeoutSeconds=401&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1844&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1844&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3a2e85d5f6d40132cd07a8528fdcee3c6884255d3b84564df27db35a0045069e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46806->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46806->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [81df7a9501142bd1a7b8159dbfc2cf2060325a6d10d0dd3484e8693e93bc0ac7] <==
	[INFO] 10.244.1.2:51684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008851s
	[INFO] 10.244.1.2:37865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122394s
	[INFO] 10.244.0.4:41864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103464s
	[INFO] 10.244.0.4:48776 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078784s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060251s
	[INFO] 10.244.1.2:44802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115237s
	[INFO] 10.244.1.2:33948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012433s
	[INFO] 10.244.0.4:54781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008753s
	[INFO] 10.244.0.4:54168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243725s
	[INFO] 10.244.0.4:60539 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140289s
	[INFO] 10.244.2.2:37865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093682s
	[INFO] 10.244.2.2:38339 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116317s
	[INFO] 10.244.1.2:44551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117883s
	[INFO] 10.244.1.2:42004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008187s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1788": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1788": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b1bfad9b3a0b98df209e0afeaf31fdb3241c1e0c968335299ab913c529a7db8f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[776725303]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 12:48:47.556) (total time: 11492ms):
	Trace[776725303]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37466->10.96.0.1:443: read: connection reset by peer 11492ms (12:48:59.049)
	Trace[776725303]: [11.492842619s] [11.492842619s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-252263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T12_37_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:53:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:49:18 +0000   Mon, 20 May 2024 12:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-252263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 35935ea8555a4df9a418abd1fd7734ca
	  System UUID:                35935ea8-555a-4df9-a418-abd1fd7734ca
	  Boot ID:                    96326bcd-6af4-4e73-8e52-8d2d55c0ef49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vdgxd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-96h5w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-c2vkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-252263                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-8vkjc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-252263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-252263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-z5zvt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-252263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-252263                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m30s  kube-proxy       
	  Normal   Starting                 15m    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-252263 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-252263 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-252263 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-252263 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Warning  ContainerGCFailed        6m12s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m24s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   RegisteredNode           4m15s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	  Normal   RegisteredNode           3m12s  node-controller  Node ha-252263 event: Registered Node ha-252263 in Controller
	
	
	Name:               ha-252263-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_38_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:38:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:53:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:50:01 +0000   Mon, 20 May 2024 12:49:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-252263-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 39c8edfb8be441aab0eaa91516d89ad1
	  System UUID:                39c8edfb-8be4-41aa-b0ea-a91516d89ad1
	  Boot ID:                    c0a161fe-111b-4df4-b1a3-a438fa28cf3b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqdrj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-252263-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-lfz72                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-252263-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-252263-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-84x7f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-252263-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-252263-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-252263-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-252263-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-252263-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-252263-m02 status is now: NodeNotReady
	  Normal  Starting                 4m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-252263-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-252263-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-252263-m02 event: Registered Node ha-252263-m02 in Controller
	
	
	Name:               ha-252263-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-252263-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=ha-252263
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T12_40_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:40:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-252263-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:51:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:52:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:52:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:52:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 12:51:02 +0000   Mon, 20 May 2024 12:52:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-252263-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e01b8d01b7b3442aafbd1460443cc06b
	  System UUID:                e01b8d01-b7b3-442a-afbd-1460443cc06b
	  Boot ID:                    58c85148-8788-44b4-9405-a7fc7e26d1ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-svgj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-5st4d              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-gww58           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-252263-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-252263-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-252263-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-252263-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m24s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   NodeNotReady             3m44s                  node-controller  Node ha-252263-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-252263-m04 event: Registered Node ha-252263-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-252263-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-252263-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-252263-m04 has been rebooted, boot id: 58c85148-8788-44b4-9405-a7fc7e26d1ce
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-252263-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-252263-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.720517] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056941] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063479] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.182637] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137786] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261133] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.100200] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.178110] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.059165] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.929456] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.070241] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.174384] kauditd_printk_skb: 21 callbacks suppressed
	[May20 12:38] kauditd_printk_skb: 74 callbacks suppressed
	[May20 12:48] systemd-fstab-generator[3701]: Ignoring "noauto" option for root device
	[  +0.145232] systemd-fstab-generator[3713]: Ignoring "noauto" option for root device
	[  +0.172934] systemd-fstab-generator[3727]: Ignoring "noauto" option for root device
	[  +0.147920] systemd-fstab-generator[3739]: Ignoring "noauto" option for root device
	[  +0.278787] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +5.238925] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +0.084712] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.997490] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.164477] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.056224] kauditd_printk_skb: 1 callbacks suppressed
	[May20 12:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57b99e90b3f2c39677e85ab90dbc5283f1bb14767c54b64c537af8525b2f342b] <==
	2024/05/20 12:46:51 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T12:46:51.821116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"993.905199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-20T12:46:51.821126Z","caller":"traceutil/trace.go:171","msg":"trace[120374282] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"993.925998ms","start":"2024-05-20T12:46:50.827198Z","end":"2024-05-20T12:46:51.821124Z","steps":["trace[120374282] 'agreement among raft nodes before linearized reading'  (duration: 993.914319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:46:51.821138Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:46:50.827194Z","time spent":"993.940422ms","remote":"127.0.0.1:49620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" limit:10000 "}
	2024/05/20 12:46:51 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T12:46:51.893337Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.182:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T12:46:51.893386Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.182:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T12:46:51.893459Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"50ad4904f737d679","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T12:46:51.893654Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893684Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893726Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893773Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893815Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893864Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.893875Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bc7256b09c5d993a"}
	{"level":"info","ts":"2024-05-20T12:46:51.89388Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.893889Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.893991Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894046Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894093Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.894145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:46:51.896434Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.182:2380"}
	{"level":"info","ts":"2024-05-20T12:46:51.896679Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.182:2380"}
	{"level":"info","ts":"2024-05-20T12:46:51.896711Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-252263","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.182:2380"],"advertise-client-urls":["https://192.168.39.182:2379"]}
	
	
	==> etcd [daebcc18593c3529154e4403056bd79c0b10c1d4ccda1bcb37f66e9611704cd2] <==
	{"level":"info","ts":"2024-05-20T12:50:19.878968Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.888799Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.929306Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"50ad4904f737d679","to":"30f45f742d7f2ecf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T12:50:19.929361Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:50:19.945499Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"50ad4904f737d679","to":"30f45f742d7f2ecf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T12:50:19.945714Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.731486Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.60:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-05-20T12:51:14.747518Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.60:34510","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-05-20T12:51:14.779377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 switched to configuration voters=(5813382979681506937 13579011143013079354)"}
	{"level":"info","ts":"2024-05-20T12:51:14.781636Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"c3ca243f487c5ef6","local-member-id":"50ad4904f737d679","removed-remote-peer-id":"30f45f742d7f2ecf","removed-remote-peer-urls":["https://192.168.39.60:2380"]}
	{"level":"info","ts":"2024-05-20T12:51:14.781786Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.782197Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:51:14.782286Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.782858Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:51:14.783017Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:51:14.783247Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.783481Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf","error":"context canceled"}
	{"level":"warn","ts":"2024-05-20T12:51:14.783552Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"30f45f742d7f2ecf","error":"failed to read 30f45f742d7f2ecf on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-05-20T12:51:14.783605Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.783775Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-05-20T12:51:14.783848Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"50ad4904f737d679","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:51:14.783952Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"info","ts":"2024-05-20T12:51:14.783999Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"50ad4904f737d679","removed-remote-peer-id":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.795327Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"50ad4904f737d679","remote-peer-id-stream-handler":"50ad4904f737d679","remote-peer-id-from":"30f45f742d7f2ecf"}
	{"level":"warn","ts":"2024-05-20T12:51:14.805774Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.60:35336","server-name":"","error":"read tcp 192.168.39.182:2380->192.168.39.60:35336: read: connection reset by peer"}
	
	
	==> kernel <==
	 12:53:49 up 16 min,  0 users,  load average: 0.17, 0.24, 0.21
	Linux ha-252263 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c1f331e32feb944a38f046c992d761292714651f3f2c6849bbf6620ea48cccd] <==
	I0520 12:53:02.041835       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:53:12.057235       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:53:12.057418       1 main.go:227] handling current node
	I0520 12:53:12.057484       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:53:12.057519       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:53:12.057663       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:53:12.057707       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:53:22.071644       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:53:22.071749       1 main.go:227] handling current node
	I0520 12:53:22.071778       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:53:22.071796       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:53:22.071975       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:53:22.072005       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:53:32.078618       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:53:32.078781       1 main.go:227] handling current node
	I0520 12:53:32.078809       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:53:32.078828       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:53:32.079023       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:53:32.079052       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	I0520 12:53:42.090802       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0520 12:53:42.091004       1 main.go:227] handling current node
	I0520 12:53:42.091040       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0520 12:53:42.091060       1 main.go:250] Node ha-252263-m02 has CIDR [10.244.1.0/24] 
	I0520 12:53:42.091225       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0520 12:53:42.091247       1 main.go:250] Node ha-252263-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127] <==
	I0520 12:48:31.064626       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0520 12:48:31.064774       1 main.go:107] hostIP = 192.168.39.182
	podIP = 192.168.39.182
	I0520 12:48:31.065029       1 main.go:116] setting mtu 1500 for CNI 
	I0520 12:48:31.065078       1 main.go:146] kindnetd IP family: "ipv4"
	I0520 12:48:31.065112       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 12:48:31.363404       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0520 12:48:34.473345       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 12:48:37.545353       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 12:48:40.617491       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 12:48:53.629512       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [1779ba907d6994d11f9a45e625376b59d1028391cb206e425109a32a70922b79] <==
	I0520 12:49:21.762083       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 12:49:21.813829       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 12:49:21.821271       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 12:49:21.821307       1 policy_source.go:224] refreshing policies
	I0520 12:49:21.830543       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 12:49:21.852297       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 12:49:21.853842       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 12:49:21.854155       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 12:49:21.855055       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 12:49:21.855236       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 12:49:21.855322       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 12:49:21.862529       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 12:49:21.865322       1 aggregator.go:165] initial CRD sync complete...
	I0520 12:49:21.866020       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 12:49:21.866583       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 12:49:21.866632       1 cache.go:39] Caches are synced for autoregister controller
	I0520 12:49:21.865971       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0520 12:49:21.873845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.60]
	I0520 12:49:21.875144       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:49:21.884952       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0520 12:49:21.888038       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0520 12:49:22.759161       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0520 12:49:23.108874       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.22 192.168.39.60]
	W0520 12:49:33.110505       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.22]
	W0520 12:51:23.114390       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.182 192.168.39.22]
	
	
	==> kube-apiserver [3994d5ac68b46bb2ce2334369ac22a5c8a183617d5d7fd8267efc7fa2c2a00d1] <==
	I0520 12:48:36.521847       1 options.go:221] external host was not specified, using 192.168.39.182
	I0520 12:48:36.526282       1 server.go:148] Version: v1.30.1
	I0520 12:48:36.526330       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:48:37.045856       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 12:48:37.046134       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 12:48:37.046293       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 12:48:37.046449       1 instance.go:299] Using reconciler: lease
	I0520 12:48:37.046184       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0520 12:48:57.044273       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0520 12:48:57.044335       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0520 12:48:57.047806       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [bbdb833df0479d9baa1bd879bd91885822eb83dad3a7e1bfa9fa0facd04a3853] <==
	E0520 12:51:54.165476       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:51:54.165599       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:51:54.165673       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:51:54.165720       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	I0520 12:52:04.309157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.778526ms"
	I0520 12:52:04.309288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.213µs"
	E0520 12:52:14.166371       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:52:14.166415       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:52:14.166423       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:52:14.166428       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	E0520 12:52:14.166432       1 gc_controller.go:153] "Failed to get node" err="node \"ha-252263-m03\" not found" logger="pod-garbage-collector-controller" node="ha-252263-m03"
	I0520 12:52:14.177873       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-252263-m03"
	I0520 12:52:14.215295       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-252263-m03"
	I0520 12:52:14.215420       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-252263-m03"
	I0520 12:52:14.242546       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-252263-m03"
	I0520 12:52:14.242644       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-c8zs5"
	I0520 12:52:14.274577       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-c8zs5"
	I0520 12:52:14.274753       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-252263-m03"
	I0520 12:52:14.298633       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-252263-m03"
	I0520 12:52:14.298684       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d67g2"
	I0520 12:52:14.337225       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d67g2"
	I0520 12:52:14.337379       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-252263-m03"
	I0520 12:52:14.364843       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-252263-m03"
	I0520 12:52:14.364999       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-252263-m03"
	I0520 12:52:14.393413       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-252263-m03"
	
	
	==> kube-controller-manager [d506292b9f2755d91068a92c3c25de5719c010a40331d001c0ff7db6fadb1db2] <==
	I0520 12:48:36.881521       1 serving.go:380] Generated self-signed cert in-memory
	I0520 12:48:37.299070       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 12:48:37.299156       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:48:37.300663       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 12:48:37.301013       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 12:48:37.301191       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 12:48:37.301340       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0520 12:48:58.052731       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.182:8443/healthz\": dial tcp 192.168.39.182:8443: connect: connection refused"
	
	
	==> kube-proxy [8481a0a858b8f8930761252ea3ec5c725dd156a897b9a75a1f3be1ddd232534e] <==
	E0520 12:45:42.634170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:45.707254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:45.707448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:45.707608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:45.707552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:45.707685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:45.707778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:51.850221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:51.850687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:51.850809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:51.851022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:45:51.850955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:45:51.851107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:01.066377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:01.066523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:04.138835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:04.139297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:04.139462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:04.139522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:19.497253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:19.497753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1842": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:22.570109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:22.570347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 12:46:22.570471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 12:46:22.570513       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-252263&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [ece57eb718aac59d2f1cf6ac9aca0b1035e4b3714adf024029120432f858b127] <==
	I0520 12:48:37.319815       1 server_linux.go:69] "Using iptables proxy"
	E0520 12:48:37.738050       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:48:40.810880       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:48:43.881357       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:48:50.026972       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 12:49:02.313996       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-252263\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 12:49:18.772625       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.182"]
	I0520 12:49:18.833757       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:49:18.833847       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:49:18.833877       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:49:18.836741       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:49:18.837141       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:49:18.837357       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:49:18.838871       1 config.go:192] "Starting service config controller"
	I0520 12:49:18.838967       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:49:18.839015       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:49:18.839033       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:49:18.839637       1 config.go:319] "Starting node config controller"
	I0520 12:49:18.839674       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:49:18.939991       1 shared_informer.go:320] Caches are synced for node config
	I0520 12:49:18.940081       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:49:18.940136       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8516a1fdea0a59d3e9c38feefaee45d223b114dae4aa8eae1b5be53231f70290] <==
	W0520 12:46:48.577796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:48.577942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:48.697680       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 12:46:48.697780       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 12:46:48.721814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:46:48.721953       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:46:48.782494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:46:48.782640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 12:46:48.807110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:48.807189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:48.918808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 12:46:48.918968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 12:46:49.055551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:46:49.055715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:46:49.803504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:46:49.803554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:46:50.124720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:46:50.124808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:46:50.219194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:50.219288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:50.309520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:46:50.309552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:46:50.940146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:46:50.940261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 12:46:51.803280       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a527eb856411d4aba7319eeb2dc863a594a9a5b9a7dbf5fe3b7be97828a14792] <==
	W0520 12:49:15.555413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:15.555477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:15.653644       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.182:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:15.653730       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.182:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:16.775479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.182:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:16.775534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.182:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:17.543703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.182:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:17.543773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.182:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:17.823633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.182:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:17.823705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.182:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.388446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.182:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.388503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.182:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.644116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.644174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.779659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.779722       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.182:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:18.872582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.182:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	E0520 12:49:18.872641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.182:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.182:8443: connect: connection refused
	W0520 12:49:21.790760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:49:21.790961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0520 12:49:41.260947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 12:51:11.512765       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-svgj6\": pod busybox-fc5497c4f-svgj6 is already assigned to node \"ha-252263-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-svgj6" node="ha-252263-m04"
	E0520 12:51:11.512888       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5e1c45d4-e922-4656-97ae-495208badbf3(default/busybox-fc5497c4f-svgj6) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-svgj6"
	E0520 12:51:11.512960       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-svgj6\": pod busybox-fc5497c4f-svgj6 is already assigned to node \"ha-252263-m04\"" pod="default/busybox-fc5497c4f-svgj6"
	I0520 12:51:11.512984       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-svgj6" node="ha-252263-m04"
	
	
	==> kubelet <==
	May 20 12:49:50 ha-252263 kubelet[1370]: I0520 12:49:50.928332    1370 scope.go:117] "RemoveContainer" containerID="f0fa157b9750dbaae674c734d89025157f8778420d5cef1a7872847fd3b63127"
	May 20 12:49:57 ha-252263 kubelet[1370]: I0520 12:49:57.928765    1370 scope.go:117] "RemoveContainer" containerID="ea90ef3e02cffba0ef036fc3cfe3601f23f8ebd8916f3965377c0f0a64bb9bdf"
	May 20 12:50:15 ha-252263 kubelet[1370]: I0520 12:50:15.928383    1370 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-252263" podUID="6e5827b4-5a1c-4523-9282-8c901ab68b5a"
	May 20 12:50:15 ha-252263 kubelet[1370]: I0520 12:50:15.948245    1370 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-252263"
	May 20 12:50:17 ha-252263 kubelet[1370]: I0520 12:50:17.953168    1370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-252263" podStartSLOduration=2.953143597 podStartE2EDuration="2.953143597s" podCreationTimestamp="2024-05-20 12:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 12:50:17.952768164 +0000 UTC m=+760.168678253" watchObservedRunningTime="2024-05-20 12:50:17.953143597 +0000 UTC m=+760.169053688"
	May 20 12:50:37 ha-252263 kubelet[1370]: E0520 12:50:37.952184    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:50:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:50:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:50:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:50:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:51:37 ha-252263 kubelet[1370]: E0520 12:51:37.954263    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:51:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:51:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:51:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:51:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:52:37 ha-252263 kubelet[1370]: E0520 12:52:37.945610    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:52:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:52:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:52:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:52:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:53:37 ha-252263 kubelet[1370]: E0520 12:53:37.946612    1370 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:53:37 ha-252263 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:53:37 ha-252263 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:53:37 ha-252263 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:53:37 ha-252263 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0520 12:53:48.142904  883600 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18932-852915/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-252263 -n ha-252263
helpers_test.go:261: (dbg) Run:  kubectl --context ha-252263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.76s)

x
+
TestMultiNode/serial/RestartKeepsNodes (305.22s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-865571
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-865571
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-865571: exit status 82 (2m1.932416488s)

-- stdout --
	* Stopping node "multinode-865571-m03"  ...
	* Stopping node "multinode-865571-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-865571" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865571 --wait=true -v=8 --alsologtostderr
E0520 13:11:10.516305  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865571 --wait=true -v=8 --alsologtostderr: (3m1.034111061s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-865571
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-865571 -n multinode-865571
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-865571 logs -n 25: (1.502658377s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile540683293/001/cp-test_multinode-865571-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571:/home/docker/cp-test_multinode-865571-m02_multinode-865571.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571 sudo cat                                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m02_multinode-865571.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03:/home/docker/cp-test_multinode-865571-m02_multinode-865571-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571-m03 sudo cat                                   | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m02_multinode-865571-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp testdata/cp-test.txt                                                | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile540683293/001/cp-test_multinode-865571-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571:/home/docker/cp-test_multinode-865571-m03_multinode-865571.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571 sudo cat                                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m03_multinode-865571.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02:/home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571-m02 sudo cat                                   | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-865571 node stop m03                                                          | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	| node    | multinode-865571 node start                                                             | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	| stop    | -p multinode-865571                                                                     | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	| start   | -p multinode-865571                                                                     | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:09 UTC | 20 May 24 13:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:09:41
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:09:41.401627  892584 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:09:41.401744  892584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:09:41.401755  892584 out.go:304] Setting ErrFile to fd 2...
	I0520 13:09:41.401761  892584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:09:41.401972  892584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:09:41.402513  892584 out.go:298] Setting JSON to false
	I0520 13:09:41.403544  892584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10329,"bootTime":1716200252,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:09:41.403600  892584 start.go:139] virtualization: kvm guest
	I0520 13:09:41.405890  892584 out.go:177] * [multinode-865571] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:09:41.407691  892584 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:09:41.409038  892584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:09:41.407689  892584 notify.go:220] Checking for updates...
	I0520 13:09:41.411265  892584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:09:41.412518  892584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:09:41.413734  892584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:09:41.414959  892584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:09:41.416536  892584 config.go:182] Loaded profile config "multinode-865571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:09:41.416680  892584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:09:41.417169  892584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:09:41.417224  892584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:09:41.438382  892584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0520 13:09:41.438825  892584 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:09:41.439419  892584 main.go:141] libmachine: Using API Version  1
	I0520 13:09:41.439440  892584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:09:41.439948  892584 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:09:41.440192  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:09:41.475614  892584 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:09:41.476872  892584 start.go:297] selected driver: kvm2
	I0520 13:09:41.476881  892584 start.go:901] validating driver "kvm2" against &{Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:09:41.477028  892584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:09:41.477331  892584 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:09:41.477390  892584 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:09:41.492014  892584 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:09:41.492670  892584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:09:41.492743  892584 cni.go:84] Creating CNI manager for ""
	I0520 13:09:41.492756  892584 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 13:09:41.492814  892584 start.go:340] cluster config:
	{Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:09:41.492934  892584 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:09:41.494792  892584 out.go:177] * Starting "multinode-865571" primary control-plane node in "multinode-865571" cluster
	I0520 13:09:41.496090  892584 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:09:41.496118  892584 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:09:41.496128  892584 cache.go:56] Caching tarball of preloaded images
	I0520 13:09:41.496205  892584 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:09:41.496216  892584 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:09:41.496329  892584 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/config.json ...
	I0520 13:09:41.496504  892584 start.go:360] acquireMachinesLock for multinode-865571: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:09:41.496541  892584 start.go:364] duration metric: took 20.303µs to acquireMachinesLock for "multinode-865571"
	I0520 13:09:41.496553  892584 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:09:41.496561  892584 fix.go:54] fixHost starting: 
	I0520 13:09:41.496843  892584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:09:41.496877  892584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:09:41.510550  892584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0520 13:09:41.511048  892584 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:09:41.511523  892584 main.go:141] libmachine: Using API Version  1
	I0520 13:09:41.511545  892584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:09:41.511814  892584 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:09:41.512008  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:09:41.512136  892584 main.go:141] libmachine: (multinode-865571) Calling .GetState
	I0520 13:09:41.513744  892584 fix.go:112] recreateIfNeeded on multinode-865571: state=Running err=<nil>
	W0520 13:09:41.513764  892584 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:09:41.516559  892584 out.go:177] * Updating the running kvm2 "multinode-865571" VM ...
	I0520 13:09:41.518042  892584 machine.go:94] provisionDockerMachine start ...
	I0520 13:09:41.518066  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:09:41.518259  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.520775  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.521279  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.521309  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.521430  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.521593  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.521781  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.521955  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.522131  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:41.522339  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:41.522351  892584 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:09:41.640152  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-865571
	
	I0520 13:09:41.640182  892584 main.go:141] libmachine: (multinode-865571) Calling .GetMachineName
	I0520 13:09:41.640448  892584 buildroot.go:166] provisioning hostname "multinode-865571"
	I0520 13:09:41.640481  892584 main.go:141] libmachine: (multinode-865571) Calling .GetMachineName
	I0520 13:09:41.640673  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.643431  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.643791  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.643829  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.644010  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.644209  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.644384  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.644524  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.644680  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:41.644856  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:41.644869  892584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-865571 && echo "multinode-865571" | sudo tee /etc/hostname
	I0520 13:09:41.775557  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-865571
	
	I0520 13:09:41.775580  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.778395  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.778785  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.778832  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.779056  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.779261  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.779441  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.779608  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.779775  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:41.779968  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:41.779984  892584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-865571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-865571/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-865571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:09:41.896561  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:09:41.896598  892584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 13:09:41.896658  892584 buildroot.go:174] setting up certificates
	I0520 13:09:41.896673  892584 provision.go:84] configureAuth start
	I0520 13:09:41.896694  892584 main.go:141] libmachine: (multinode-865571) Calling .GetMachineName
	I0520 13:09:41.897018  892584 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:09:41.899498  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.899840  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.899861  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.900016  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.902091  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.902451  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.902483  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.902592  892584 provision.go:143] copyHostCerts
	I0520 13:09:41.902621  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:09:41.902668  892584 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 13:09:41.902688  892584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:09:41.902769  892584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 13:09:41.902910  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:09:41.902936  892584 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 13:09:41.902943  892584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:09:41.902984  892584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 13:09:41.903064  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:09:41.903087  892584 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 13:09:41.903094  892584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:09:41.903131  892584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 13:09:41.903212  892584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.multinode-865571 san=[127.0.0.1 192.168.39.78 localhost minikube multinode-865571]
	I0520 13:09:41.981621  892584 provision.go:177] copyRemoteCerts
	I0520 13:09:41.981693  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:09:41.981734  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.984360  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.984677  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.984707  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.984870  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.985079  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.985270  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.985401  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:09:42.074874  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:09:42.074972  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 13:09:42.101153  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:09:42.101211  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 13:09:42.125689  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:09:42.125743  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:09:42.150595  892584 provision.go:87] duration metric: took 253.901402ms to configureAuth
	I0520 13:09:42.150630  892584 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:09:42.150912  892584 config.go:182] Loaded profile config "multinode-865571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:09:42.151005  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:42.153650  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:42.154008  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:42.154036  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:42.154167  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:42.154354  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:42.154484  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:42.154595  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:42.154704  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:42.154924  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:42.154945  892584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:11:12.916665  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:11:12.916756  892584 machine.go:97] duration metric: took 1m31.39863487s to provisionDockerMachine
	I0520 13:11:12.916784  892584 start.go:293] postStartSetup for "multinode-865571" (driver="kvm2")
	I0520 13:11:12.916802  892584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:11:12.916841  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:12.917239  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:11:12.917279  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:12.920514  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:12.921031  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:12.921062  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:12.921230  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:12.921427  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:12.921598  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:12.921744  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:11:13.011195  892584 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:11:13.015534  892584 command_runner.go:130] > NAME=Buildroot
	I0520 13:11:13.015556  892584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 13:11:13.015561  892584 command_runner.go:130] > ID=buildroot
	I0520 13:11:13.015566  892584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 13:11:13.015571  892584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 13:11:13.015638  892584 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:11:13.015664  892584 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 13:11:13.015744  892584 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 13:11:13.015819  892584 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 13:11:13.015830  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 13:11:13.015906  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:11:13.025559  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:11:13.048846  892584 start.go:296] duration metric: took 132.045747ms for postStartSetup
	I0520 13:11:13.048885  892584 fix.go:56] duration metric: took 1m31.552324117s for fixHost
	I0520 13:11:13.048908  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:13.051506  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.051855  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.051880  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.052108  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:13.052325  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.052477  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.052610  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:13.052809  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:11:13.053009  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:11:13.053021  892584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:11:13.167683  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210673.149564513
	
	I0520 13:11:13.167708  892584 fix.go:216] guest clock: 1716210673.149564513
	I0520 13:11:13.167715  892584 fix.go:229] Guest: 2024-05-20 13:11:13.149564513 +0000 UTC Remote: 2024-05-20 13:11:13.048889216 +0000 UTC m=+91.683191693 (delta=100.675297ms)
	I0520 13:11:13.167736  892584 fix.go:200] guest clock delta is within tolerance: 100.675297ms
	I0520 13:11:13.167742  892584 start.go:83] releasing machines lock for "multinode-865571", held for 1m31.671192938s
	I0520 13:11:13.167762  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.168046  892584 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:11:13.170614  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.170974  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.171017  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.171204  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.171689  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.171886  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.172006  892584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:11:13.172046  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:13.172149  892584 ssh_runner.go:195] Run: cat /version.json
	I0520 13:11:13.172177  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:13.174409  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.174622  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.174769  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.174800  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.174930  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:13.175053  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.175079  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.175125  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.175239  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:13.175306  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:13.175390  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.175406  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:11:13.175510  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:13.175642  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:11:13.255378  892584 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	W0520 13:11:13.255551  892584 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:11:13.255633  892584 ssh_runner.go:195] Run: systemctl --version
	I0520 13:11:13.279927  892584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 13:11:13.279988  892584 command_runner.go:130] > systemd 252 (252)
	I0520 13:11:13.280021  892584 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 13:11:13.280159  892584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:11:13.436928  892584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 13:11:13.443022  892584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 13:11:13.443188  892584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:11:13.443256  892584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:11:13.452395  892584 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:11:13.452413  892584 start.go:494] detecting cgroup driver to use...
	I0520 13:11:13.452471  892584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:11:13.468182  892584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:11:13.481794  892584 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:11:13.481852  892584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:11:13.494673  892584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:11:13.507653  892584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:11:13.644228  892584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:11:13.780453  892584 docker.go:233] disabling docker service ...
	I0520 13:11:13.780534  892584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:11:13.797929  892584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:11:13.813275  892584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:11:13.951120  892584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:11:14.092338  892584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:11:14.107709  892584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:11:14.126780  892584 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 13:11:14.126858  892584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:11:14.126921  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.137886  892584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:11:14.137959  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.148728  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.159271  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.170000  892584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:11:14.181237  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.192155  892584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.202813  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.214831  892584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:11:14.227151  892584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 13:11:14.227220  892584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:11:14.236659  892584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:11:14.369098  892584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:11:20.897191  892584 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.528050351s)
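Before that restart, the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is forced to "cgroupfs", conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A grep like the following would confirm the result on the node (expected values reconstructed from those commands, not captured from this VM):
	# spot-check the keys the sed edits above are supposed to have set
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",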
	I0520 13:11:20.897222  892584 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:11:20.897272  892584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:11:20.902136  892584 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 13:11:20.902165  892584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 13:11:20.902174  892584 command_runner.go:130] > Device: 0,22	Inode: 1325        Links: 1
	I0520 13:11:20.902180  892584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:11:20.902186  892584 command_runner.go:130] > Access: 2024-05-20 13:11:20.775317375 +0000
	I0520 13:11:20.902192  892584 command_runner.go:130] > Modify: 2024-05-20 13:11:20.775317375 +0000
	I0520 13:11:20.902197  892584 command_runner.go:130] > Change: 2024-05-20 13:11:20.775317375 +0000
	I0520 13:11:20.902201  892584 command_runner.go:130] >  Birth: -
	I0520 13:11:20.902239  892584 start.go:562] Will wait 60s for crictl version
	I0520 13:11:20.902277  892584 ssh_runner.go:195] Run: which crictl
	I0520 13:11:20.905990  892584 command_runner.go:130] > /usr/bin/crictl
	I0520 13:11:20.906055  892584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:11:20.941487  892584 command_runner.go:130] > Version:  0.1.0
	I0520 13:11:20.941507  892584 command_runner.go:130] > RuntimeName:  cri-o
	I0520 13:11:20.941511  892584 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 13:11:20.941516  892584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 13:11:20.942569  892584 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:11:20.942633  892584 ssh_runner.go:195] Run: crio --version
	I0520 13:11:20.970140  892584 command_runner.go:130] > crio version 1.29.1
	I0520 13:11:20.970164  892584 command_runner.go:130] > Version:        1.29.1
	I0520 13:11:20.970169  892584 command_runner.go:130] > GitCommit:      unknown
	I0520 13:11:20.970174  892584 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:11:20.970178  892584 command_runner.go:130] > GitTreeState:   clean
	I0520 13:11:20.970184  892584 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:11:20.970189  892584 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:11:20.970192  892584 command_runner.go:130] > Compiler:       gc
	I0520 13:11:20.970197  892584 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:11:20.970207  892584 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:11:20.970213  892584 command_runner.go:130] > BuildTags:      
	I0520 13:11:20.970220  892584 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:11:20.970231  892584 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:11:20.970238  892584 command_runner.go:130] >   btrfs_noversion
	I0520 13:11:20.970248  892584 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:11:20.970258  892584 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:11:20.970265  892584 command_runner.go:130] >   seccomp
	I0520 13:11:20.970269  892584 command_runner.go:130] > LDFlags:          unknown
	I0520 13:11:20.970276  892584 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:11:20.970280  892584 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:11:20.971407  892584 ssh_runner.go:195] Run: crio --version
	I0520 13:11:20.999282  892584 command_runner.go:130] > crio version 1.29.1
	I0520 13:11:20.999310  892584 command_runner.go:130] > Version:        1.29.1
	I0520 13:11:20.999319  892584 command_runner.go:130] > GitCommit:      unknown
	I0520 13:11:20.999325  892584 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:11:20.999331  892584 command_runner.go:130] > GitTreeState:   clean
	I0520 13:11:20.999338  892584 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:11:20.999344  892584 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:11:20.999350  892584 command_runner.go:130] > Compiler:       gc
	I0520 13:11:20.999358  892584 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:11:20.999365  892584 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:11:20.999374  892584 command_runner.go:130] > BuildTags:      
	I0520 13:11:20.999385  892584 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:11:20.999393  892584 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:11:20.999402  892584 command_runner.go:130] >   btrfs_noversion
	I0520 13:11:20.999410  892584 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:11:20.999421  892584 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:11:20.999431  892584 command_runner.go:130] >   seccomp
	I0520 13:11:20.999439  892584 command_runner.go:130] > LDFlags:          unknown
	I0520 13:11:20.999458  892584 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:11:20.999468  892584 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:11:21.002386  892584 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:11:21.003758  892584 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:11:21.006648  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:21.007087  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:21.007119  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:21.007332  892584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:11:21.011630  892584 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 13:11:21.011730  892584 kubeadm.go:877] updating cluster {Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:11:21.011874  892584 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:11:21.011919  892584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:11:21.059854  892584 command_runner.go:130] > {
	I0520 13:11:21.059877  892584 command_runner.go:130] >   "images": [
	I0520 13:11:21.059881  892584 command_runner.go:130] >     {
	I0520 13:11:21.059890  892584 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:11:21.059894  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.059901  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:11:21.059904  892584 command_runner.go:130] >       ],
	I0520 13:11:21.059914  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.059927  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:11:21.059939  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:11:21.059949  892584 command_runner.go:130] >       ],
	I0520 13:11:21.059956  892584 command_runner.go:130] >       "size": "65291810",
	I0520 13:11:21.059962  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.059970  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.059984  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.059991  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.059995  892584 command_runner.go:130] >     },
	I0520 13:11:21.059999  892584 command_runner.go:130] >     {
	I0520 13:11:21.060005  892584 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 13:11:21.060009  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060014  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 13:11:21.060018  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060023  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060035  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 13:11:21.060050  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 13:11:21.060057  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060067  892584 command_runner.go:130] >       "size": "1363676",
	I0520 13:11:21.060074  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060088  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060095  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060099  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060102  892584 command_runner.go:130] >     },
	I0520 13:11:21.060106  892584 command_runner.go:130] >     {
	I0520 13:11:21.060111  892584 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:11:21.060116  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060121  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:11:21.060126  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060131  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060144  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:11:21.060160  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:11:21.060169  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060176  892584 command_runner.go:130] >       "size": "31470524",
	I0520 13:11:21.060186  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060192  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060199  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060203  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060209  892584 command_runner.go:130] >     },
	I0520 13:11:21.060212  892584 command_runner.go:130] >     {
	I0520 13:11:21.060218  892584 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:11:21.060224  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060229  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:11:21.060235  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060242  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060258  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:11:21.060280  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:11:21.060290  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060296  892584 command_runner.go:130] >       "size": "61245718",
	I0520 13:11:21.060302  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060309  892584 command_runner.go:130] >       "username": "nonroot",
	I0520 13:11:21.060315  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060319  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060323  892584 command_runner.go:130] >     },
	I0520 13:11:21.060326  892584 command_runner.go:130] >     {
	I0520 13:11:21.060335  892584 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:11:21.060355  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060367  892584 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:11:21.060376  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060382  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060396  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:11:21.060410  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:11:21.060417  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060421  892584 command_runner.go:130] >       "size": "150779692",
	I0520 13:11:21.060428  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.060434  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.060443  892584 command_runner.go:130] >       },
	I0520 13:11:21.060449  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060459  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060474  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060479  892584 command_runner.go:130] >     },
	I0520 13:11:21.060487  892584 command_runner.go:130] >     {
	I0520 13:11:21.060497  892584 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:11:21.060506  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060513  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:11:21.060522  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060529  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060548  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:11:21.060563  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:11:21.060572  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060582  892584 command_runner.go:130] >       "size": "117601759",
	I0520 13:11:21.060593  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.060600  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.060606  892584 command_runner.go:130] >       },
	I0520 13:11:21.060610  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060618  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060626  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060633  892584 command_runner.go:130] >     },
	I0520 13:11:21.060641  892584 command_runner.go:130] >     {
	I0520 13:11:21.060651  892584 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:11:21.060660  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060671  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:11:21.060687  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060694  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060726  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:11:21.060743  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:11:21.060750  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060759  892584 command_runner.go:130] >       "size": "112170310",
	I0520 13:11:21.060765  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.060773  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.060780  892584 command_runner.go:130] >       },
	I0520 13:11:21.060787  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060797  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060804  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060812  892584 command_runner.go:130] >     },
	I0520 13:11:21.060815  892584 command_runner.go:130] >     {
	I0520 13:11:21.060824  892584 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:11:21.060834  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060846  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:11:21.060851  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060861  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060888  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:11:21.060902  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:11:21.060908  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060913  892584 command_runner.go:130] >       "size": "85933465",
	I0520 13:11:21.060917  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060920  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060926  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060932  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060938  892584 command_runner.go:130] >     },
	I0520 13:11:21.060944  892584 command_runner.go:130] >     {
	I0520 13:11:21.060952  892584 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:11:21.060959  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060967  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:11:21.060972  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060978  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060993  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:11:21.061002  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:11:21.061012  892584 command_runner.go:130] >       ],
	I0520 13:11:21.061019  892584 command_runner.go:130] >       "size": "63026504",
	I0520 13:11:21.061026  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.061036  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.061042  892584 command_runner.go:130] >       },
	I0520 13:11:21.061051  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.061057  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.061066  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.061072  892584 command_runner.go:130] >     },
	I0520 13:11:21.061080  892584 command_runner.go:130] >     {
	I0520 13:11:21.061087  892584 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:11:21.061093  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.061101  892584 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:11:21.061110  892584 command_runner.go:130] >       ],
	I0520 13:11:21.061117  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.061130  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:11:21.061144  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:11:21.061152  892584 command_runner.go:130] >       ],
	I0520 13:11:21.061159  892584 command_runner.go:130] >       "size": "750414",
	I0520 13:11:21.061169  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.061175  892584 command_runner.go:130] >         "value": "65535"
	I0520 13:11:21.061180  892584 command_runner.go:130] >       },
	I0520 13:11:21.061185  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.061192  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.061199  892584 command_runner.go:130] >       "pinned": true
	I0520 13:11:21.061207  892584 command_runner.go:130] >     }
	I0520 13:11:21.061212  892584 command_runner.go:130] >   ]
	I0520 13:11:21.061217  892584 command_runner.go:130] > }
	I0520 13:11:21.061435  892584 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:11:21.061448  892584 crio.go:433] Images already preloaded, skipping extraction
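The JSON dump above is what the preload check parses: every image required for Kubernetes v1.30.1 on crio already has an entry, so extraction is skipped. For a quicker look at the same data on the node, the tag list can be pulled straight out of crictl's JSON (assuming jq happens to be available in the guest, which minikube does not require):
	# print only the repo tags from crictl's JSON image listing
	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# the plain table form carries the same information without jq
	sudo crictl images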
	I0520 13:11:21.061507  892584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:11:21.093098  892584 command_runner.go:130] > {
	I0520 13:11:21.093128  892584 command_runner.go:130] >   "images": [
	I0520 13:11:21.093135  892584 command_runner.go:130] >     {
	I0520 13:11:21.093143  892584 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:11:21.093157  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093166  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:11:21.093174  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093180  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093195  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:11:21.093210  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:11:21.093215  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093223  892584 command_runner.go:130] >       "size": "65291810",
	I0520 13:11:21.093227  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093231  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093248  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093255  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093258  892584 command_runner.go:130] >     },
	I0520 13:11:21.093262  892584 command_runner.go:130] >     {
	I0520 13:11:21.093268  892584 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 13:11:21.093276  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093288  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 13:11:21.093296  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093303  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093317  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 13:11:21.093328  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 13:11:21.093335  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093339  892584 command_runner.go:130] >       "size": "1363676",
	I0520 13:11:21.093345  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093352  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093358  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093362  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093367  892584 command_runner.go:130] >     },
	I0520 13:11:21.093371  892584 command_runner.go:130] >     {
	I0520 13:11:21.093382  892584 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:11:21.093393  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093402  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:11:21.093414  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093423  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093434  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:11:21.093444  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:11:21.093449  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093454  892584 command_runner.go:130] >       "size": "31470524",
	I0520 13:11:21.093460  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093464  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093472  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093483  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093491  892584 command_runner.go:130] >     },
	I0520 13:11:21.093497  892584 command_runner.go:130] >     {
	I0520 13:11:21.093511  892584 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:11:21.093520  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093532  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:11:21.093540  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093547  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093557  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:11:21.093569  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:11:21.093578  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093588  892584 command_runner.go:130] >       "size": "61245718",
	I0520 13:11:21.093595  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093605  892584 command_runner.go:130] >       "username": "nonroot",
	I0520 13:11:21.093618  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093627  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093636  892584 command_runner.go:130] >     },
	I0520 13:11:21.093644  892584 command_runner.go:130] >     {
	I0520 13:11:21.093656  892584 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:11:21.093663  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093671  892584 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:11:21.093680  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093690  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093711  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:11:21.093725  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:11:21.093734  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093742  892584 command_runner.go:130] >       "size": "150779692",
	I0520 13:11:21.093748  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.093758  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.093766  892584 command_runner.go:130] >       },
	I0520 13:11:21.093774  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093784  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093792  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093801  892584 command_runner.go:130] >     },
	I0520 13:11:21.093808  892584 command_runner.go:130] >     {
	I0520 13:11:21.093821  892584 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:11:21.093830  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093838  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:11:21.093844  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093853  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093871  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:11:21.093886  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:11:21.093894  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093904  892584 command_runner.go:130] >       "size": "117601759",
	I0520 13:11:21.093913  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.093923  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.093930  892584 command_runner.go:130] >       },
	I0520 13:11:21.093934  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093940  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093947  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093955  892584 command_runner.go:130] >     },
	I0520 13:11:21.093964  892584 command_runner.go:130] >     {
	I0520 13:11:21.093977  892584 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:11:21.093987  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093999  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:11:21.094008  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094017  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094049  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:11:21.094066  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:11:21.094085  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094095  892584 command_runner.go:130] >       "size": "112170310",
	I0520 13:11:21.094104  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.094113  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.094123  892584 command_runner.go:130] >       },
	I0520 13:11:21.094131  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094135  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094143  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.094152  892584 command_runner.go:130] >     },
	I0520 13:11:21.094158  892584 command_runner.go:130] >     {
	I0520 13:11:21.094171  892584 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:11:21.094180  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.094188  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:11:21.094196  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094203  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094223  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:11:21.094238  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:11:21.094247  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094254  892584 command_runner.go:130] >       "size": "85933465",
	I0520 13:11:21.094264  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.094271  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094281  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094287  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.094296  892584 command_runner.go:130] >     },
	I0520 13:11:21.094302  892584 command_runner.go:130] >     {
	I0520 13:11:21.094314  892584 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:11:21.094320  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.094326  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:11:21.094334  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094342  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094357  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:11:21.094371  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:11:21.094379  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094385  892584 command_runner.go:130] >       "size": "63026504",
	I0520 13:11:21.094395  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.094402  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.094410  892584 command_runner.go:130] >       },
	I0520 13:11:21.094416  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094426  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094434  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.094444  892584 command_runner.go:130] >     },
	I0520 13:11:21.094452  892584 command_runner.go:130] >     {
	I0520 13:11:21.094462  892584 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:11:21.094471  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.094481  892584 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:11:21.094490  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094497  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094506  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:11:21.094525  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:11:21.094537  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094544  892584 command_runner.go:130] >       "size": "750414",
	I0520 13:11:21.094554  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.094563  892584 command_runner.go:130] >         "value": "65535"
	I0520 13:11:21.094572  892584 command_runner.go:130] >       },
	I0520 13:11:21.094582  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094590  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094599  892584 command_runner.go:130] >       "pinned": true
	I0520 13:11:21.094606  892584 command_runner.go:130] >     }
	I0520 13:11:21.094609  892584 command_runner.go:130] >   ]
	I0520 13:11:21.094613  892584 command_runner.go:130] > }
	I0520 13:11:21.094792  892584 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:11:21.094807  892584 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:11:21.094817  892584 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.30.1 crio true true} ...
	I0520 13:11:21.094976  892584 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-865571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
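The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in minikube renders for this control-plane node: ExecStart is cleared and re-declared with --hostname-override=multinode-865571 and --node-ip=192.168.39.78 so the kubelet binds to the kvm2 guest's address. On a running node, systemd can show how such a drop-in merges into the effective unit (a generic check, not something this test run performs):
	# display the kubelet unit together with all drop-ins, including the ExecStart override
	sudo systemctl cat kubelet.service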
	I0520 13:11:21.095060  892584 ssh_runner.go:195] Run: crio config
	I0520 13:11:21.140917  892584 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 13:11:21.140949  892584 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 13:11:21.140959  892584 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 13:11:21.140963  892584 command_runner.go:130] > #
	I0520 13:11:21.141002  892584 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 13:11:21.141015  892584 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 13:11:21.141021  892584 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 13:11:21.141032  892584 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 13:11:21.141041  892584 command_runner.go:130] > # reload'.
	I0520 13:11:21.141050  892584 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 13:11:21.141061  892584 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 13:11:21.141073  892584 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 13:11:21.141086  892584 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 13:11:21.141092  892584 command_runner.go:130] > [crio]
	I0520 13:11:21.141102  892584 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 13:11:21.141113  892584 command_runner.go:130] > # containers images, in this directory.
	I0520 13:11:21.141124  892584 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 13:11:21.141137  892584 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 13:11:21.141155  892584 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 13:11:21.141176  892584 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 13:11:21.141187  892584 command_runner.go:130] > # imagestore = ""
	I0520 13:11:21.141196  892584 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 13:11:21.141206  892584 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 13:11:21.141217  892584 command_runner.go:130] > storage_driver = "overlay"
	I0520 13:11:21.141230  892584 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 13:11:21.141242  892584 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 13:11:21.141250  892584 command_runner.go:130] > storage_option = [
	I0520 13:11:21.141259  892584 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 13:11:21.141267  892584 command_runner.go:130] > ]
	I0520 13:11:21.141281  892584 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 13:11:21.141292  892584 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 13:11:21.141302  892584 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 13:11:21.141312  892584 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 13:11:21.141325  892584 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 13:11:21.141335  892584 command_runner.go:130] > # always happen on a node reboot
	I0520 13:11:21.141346  892584 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 13:11:21.141368  892584 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 13:11:21.141380  892584 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 13:11:21.141390  892584 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 13:11:21.141397  892584 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 13:11:21.141410  892584 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 13:11:21.141426  892584 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 13:11:21.141436  892584 command_runner.go:130] > # internal_wipe = true
	I0520 13:11:21.141448  892584 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 13:11:21.141460  892584 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 13:11:21.141469  892584 command_runner.go:130] > # internal_repair = false
	I0520 13:11:21.141484  892584 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 13:11:21.141497  892584 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 13:11:21.141506  892584 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 13:11:21.141517  892584 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 13:11:21.141527  892584 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 13:11:21.141535  892584 command_runner.go:130] > [crio.api]
	I0520 13:11:21.141544  892584 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 13:11:21.141559  892584 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 13:11:21.141580  892584 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 13:11:21.141590  892584 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 13:11:21.141601  892584 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 13:11:21.141611  892584 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 13:11:21.141619  892584 command_runner.go:130] > # stream_port = "0"
	I0520 13:11:21.141627  892584 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 13:11:21.141634  892584 command_runner.go:130] > # stream_enable_tls = false
	I0520 13:11:21.141642  892584 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 13:11:21.141653  892584 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 13:11:21.141667  892584 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 13:11:21.141695  892584 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 13:11:21.141706  892584 command_runner.go:130] > # minutes.
	I0520 13:11:21.141712  892584 command_runner.go:130] > # stream_tls_cert = ""
	I0520 13:11:21.141737  892584 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 13:11:21.141751  892584 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 13:11:21.141760  892584 command_runner.go:130] > # stream_tls_key = ""
	I0520 13:11:21.141769  892584 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 13:11:21.141781  892584 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 13:11:21.141806  892584 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 13:11:21.141815  892584 command_runner.go:130] > # stream_tls_ca = ""
	I0520 13:11:21.141828  892584 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:11:21.141839  892584 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 13:11:21.141859  892584 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:11:21.141870  892584 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 13:11:21.141882  892584 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 13:11:21.141891  892584 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 13:11:21.141898  892584 command_runner.go:130] > [crio.runtime]
	I0520 13:11:21.141907  892584 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 13:11:21.141920  892584 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 13:11:21.141929  892584 command_runner.go:130] > # "nofile=1024:2048"
	I0520 13:11:21.141942  892584 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 13:11:21.141951  892584 command_runner.go:130] > # default_ulimits = [
	I0520 13:11:21.141957  892584 command_runner.go:130] > # ]
	I0520 13:11:21.141970  892584 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 13:11:21.141975  892584 command_runner.go:130] > # no_pivot = false
	I0520 13:11:21.141984  892584 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 13:11:21.141996  892584 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 13:11:21.142006  892584 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 13:11:21.142021  892584 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 13:11:21.142032  892584 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 13:11:21.142045  892584 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:11:21.142055  892584 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 13:11:21.142062  892584 command_runner.go:130] > # Cgroup setting for conmon
	I0520 13:11:21.142077  892584 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 13:11:21.142084  892584 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 13:11:21.142094  892584 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 13:11:21.142105  892584 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 13:11:21.142118  892584 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:11:21.142127  892584 command_runner.go:130] > conmon_env = [
	I0520 13:11:21.142136  892584 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:11:21.142141  892584 command_runner.go:130] > ]
	I0520 13:11:21.142146  892584 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 13:11:21.142153  892584 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 13:11:21.142158  892584 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 13:11:21.142164  892584 command_runner.go:130] > # default_env = [
	I0520 13:11:21.142167  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142173  892584 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 13:11:21.142180  892584 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 13:11:21.142187  892584 command_runner.go:130] > # selinux = false
	I0520 13:11:21.142196  892584 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 13:11:21.142204  892584 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 13:11:21.142217  892584 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 13:11:21.142227  892584 command_runner.go:130] > # seccomp_profile = ""
	I0520 13:11:21.142239  892584 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 13:11:21.142250  892584 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 13:11:21.142262  892584 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 13:11:21.142272  892584 command_runner.go:130] > # which might increase security.
	I0520 13:11:21.142279  892584 command_runner.go:130] > # This option is currently deprecated,
	I0520 13:11:21.142289  892584 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 13:11:21.142296  892584 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 13:11:21.142308  892584 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 13:11:21.142322  892584 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 13:11:21.142335  892584 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 13:11:21.142348  892584 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 13:11:21.142359  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.142375  892584 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 13:11:21.142383  892584 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 13:11:21.142388  892584 command_runner.go:130] > # the cgroup blockio controller.
	I0520 13:11:21.142393  892584 command_runner.go:130] > # blockio_config_file = ""
	I0520 13:11:21.142401  892584 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 13:11:21.142410  892584 command_runner.go:130] > # blockio parameters.
	I0520 13:11:21.142416  892584 command_runner.go:130] > # blockio_reload = false
	I0520 13:11:21.142428  892584 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 13:11:21.142437  892584 command_runner.go:130] > # irqbalance daemon.
	I0520 13:11:21.142445  892584 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 13:11:21.142459  892584 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 13:11:21.142473  892584 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 13:11:21.142483  892584 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 13:11:21.142496  892584 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 13:11:21.142507  892584 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 13:11:21.142518  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.142528  892584 command_runner.go:130] > # rdt_config_file = ""
	I0520 13:11:21.142539  892584 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 13:11:21.142549  892584 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 13:11:21.142568  892584 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 13:11:21.142575  892584 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 13:11:21.142580  892584 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 13:11:21.142586  892584 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 13:11:21.142592  892584 command_runner.go:130] > # will be added.
	I0520 13:11:21.142596  892584 command_runner.go:130] > # default_capabilities = [
	I0520 13:11:21.142599  892584 command_runner.go:130] > # 	"CHOWN",
	I0520 13:11:21.142603  892584 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 13:11:21.142607  892584 command_runner.go:130] > # 	"FSETID",
	I0520 13:11:21.142611  892584 command_runner.go:130] > # 	"FOWNER",
	I0520 13:11:21.142615  892584 command_runner.go:130] > # 	"SETGID",
	I0520 13:11:21.142618  892584 command_runner.go:130] > # 	"SETUID",
	I0520 13:11:21.142622  892584 command_runner.go:130] > # 	"SETPCAP",
	I0520 13:11:21.142626  892584 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 13:11:21.142629  892584 command_runner.go:130] > # 	"KILL",
	I0520 13:11:21.142632  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142639  892584 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 13:11:21.142647  892584 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 13:11:21.142652  892584 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 13:11:21.142660  892584 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 13:11:21.142669  892584 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:11:21.142678  892584 command_runner.go:130] > default_sysctls = [
	I0520 13:11:21.142690  892584 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 13:11:21.142698  892584 command_runner.go:130] > ]
	I0520 13:11:21.142705  892584 command_runner.go:130] > # List of devices on the host that a
	I0520 13:11:21.142721  892584 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 13:11:21.142730  892584 command_runner.go:130] > # allowed_devices = [
	I0520 13:11:21.142736  892584 command_runner.go:130] > # 	"/dev/fuse",
	I0520 13:11:21.142745  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142753  892584 command_runner.go:130] > # List of additional devices, specified as
	I0520 13:11:21.142767  892584 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 13:11:21.142778  892584 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 13:11:21.142786  892584 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:11:21.142796  892584 command_runner.go:130] > # additional_devices = [
	I0520 13:11:21.142802  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142813  892584 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 13:11:21.142823  892584 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 13:11:21.142830  892584 command_runner.go:130] > # 	"/etc/cdi",
	I0520 13:11:21.142838  892584 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 13:11:21.142856  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142869  892584 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 13:11:21.142882  892584 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 13:11:21.142892  892584 command_runner.go:130] > # Defaults to false.
	I0520 13:11:21.142901  892584 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 13:11:21.142914  892584 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 13:11:21.142927  892584 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 13:11:21.142936  892584 command_runner.go:130] > # hooks_dir = [
	I0520 13:11:21.142944  892584 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 13:11:21.142953  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142963  892584 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 13:11:21.142976  892584 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 13:11:21.142987  892584 command_runner.go:130] > # its default mounts from the following two files:
	I0520 13:11:21.142993  892584 command_runner.go:130] > #
	I0520 13:11:21.143005  892584 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 13:11:21.143018  892584 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 13:11:21.143029  892584 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 13:11:21.143037  892584 command_runner.go:130] > #
	I0520 13:11:21.143043  892584 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 13:11:21.143055  892584 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 13:11:21.143068  892584 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 13:11:21.143079  892584 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 13:11:21.143086  892584 command_runner.go:130] > #
	I0520 13:11:21.143094  892584 command_runner.go:130] > # default_mounts_file = ""
	I0520 13:11:21.143102  892584 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 13:11:21.143122  892584 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 13:11:21.143132  892584 command_runner.go:130] > pids_limit = 1024
	I0520 13:11:21.143141  892584 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0520 13:11:21.143154  892584 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 13:11:21.143165  892584 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 13:11:21.143182  892584 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 13:11:21.143194  892584 command_runner.go:130] > # log_size_max = -1
	I0520 13:11:21.143208  892584 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 13:11:21.143220  892584 command_runner.go:130] > # log_to_journald = false
	I0520 13:11:21.143233  892584 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 13:11:21.143244  892584 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 13:11:21.143256  892584 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 13:11:21.143266  892584 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 13:11:21.143277  892584 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 13:11:21.143286  892584 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 13:11:21.143295  892584 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 13:11:21.143304  892584 command_runner.go:130] > # read_only = false
	I0520 13:11:21.143313  892584 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 13:11:21.143327  892584 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 13:11:21.143337  892584 command_runner.go:130] > # live configuration reload.
	I0520 13:11:21.143343  892584 command_runner.go:130] > # log_level = "info"
	I0520 13:11:21.143355  892584 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 13:11:21.143366  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.143372  892584 command_runner.go:130] > # log_filter = ""
	I0520 13:11:21.143379  892584 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 13:11:21.143388  892584 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 13:11:21.143391  892584 command_runner.go:130] > # separated by comma.
	I0520 13:11:21.143398  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143404  892584 command_runner.go:130] > # uid_mappings = ""
	I0520 13:11:21.143410  892584 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 13:11:21.143417  892584 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 13:11:21.143421  892584 command_runner.go:130] > # separated by comma.
	I0520 13:11:21.143430  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143436  892584 command_runner.go:130] > # gid_mappings = ""
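The containerUID:HostUID:Size (and containerGID:HostGID:Size) syntax described above is easiest to read with concrete values. The snippet below is only an illustrative sketch and not part of the logged configuration; the 100000/65536 host range is an assumption chosen for the example:
	# Map container UID/GID 0..65535 onto an unprivileged host range starting at 100000.
	# Multiple ranges would be comma-separated, e.g. "0:100000:65536,65536:165536:1000".
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"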
	I0520 13:11:21.143447  892584 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 13:11:21.143459  892584 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:11:21.143475  892584 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:11:21.143491  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143501  892584 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 13:11:21.143518  892584 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 13:11:21.143531  892584 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:11:21.143543  892584 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:11:21.143558  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143568  892584 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 13:11:21.143581  892584 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 13:11:21.143594  892584 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 13:11:21.143606  892584 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 13:11:21.143616  892584 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 13:11:21.143627  892584 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 13:11:21.143638  892584 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 13:11:21.143646  892584 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 13:11:21.143652  892584 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 13:11:21.143661  892584 command_runner.go:130] > drop_infra_ctr = false
	I0520 13:11:21.143671  892584 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 13:11:21.143684  892584 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 13:11:21.143698  892584 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 13:11:21.143708  892584 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 13:11:21.143725  892584 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 13:11:21.143737  892584 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 13:11:21.143750  892584 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 13:11:21.143759  892584 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 13:11:21.143766  892584 command_runner.go:130] > # shared_cpuset = ""
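For reference, the Linux CPU list format mentioned above accepts single CPUs, ranges, and comma-separated combinations. A purely illustrative sketch (the CPU numbers are assumptions, not values from this run) that keeps infra containers on the kubelet's reserved CPUs and shares two others:
	# Run infra containers on the reserved CPUs 0-1; allow CPUs 2 and 3 to be shared.
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2,3"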
	I0520 13:11:21.143778  892584 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 13:11:21.143789  892584 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 13:11:21.143799  892584 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 13:11:21.143809  892584 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 13:11:21.143819  892584 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 13:11:21.143825  892584 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 13:11:21.143833  892584 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 13:11:21.143837  892584 command_runner.go:130] > # enable_criu_support = false
	I0520 13:11:21.143847  892584 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 13:11:21.143860  892584 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 13:11:21.143871  892584 command_runner.go:130] > # enable_pod_events = false
	I0520 13:11:21.143883  892584 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 13:11:21.143910  892584 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 13:11:21.143919  892584 command_runner.go:130] > # default_runtime = "runc"
	I0520 13:11:21.143927  892584 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 13:11:21.143942  892584 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0520 13:11:21.143958  892584 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 13:11:21.143972  892584 command_runner.go:130] > # creation as a file is not desired either.
	I0520 13:11:21.143988  892584 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 13:11:21.143995  892584 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 13:11:21.144000  892584 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 13:11:21.144004  892584 command_runner.go:130] > # ]
	I0520 13:11:21.144010  892584 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 13:11:21.144017  892584 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 13:11:21.144023  892584 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 13:11:21.144030  892584 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 13:11:21.144034  892584 command_runner.go:130] > #
	I0520 13:11:21.144040  892584 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 13:11:21.144046  892584 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 13:11:21.144067  892584 command_runner.go:130] > # runtime_type = "oci"
	I0520 13:11:21.144074  892584 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 13:11:21.144079  892584 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 13:11:21.144084  892584 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 13:11:21.144089  892584 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 13:11:21.144095  892584 command_runner.go:130] > # monitor_env = []
	I0520 13:11:21.144099  892584 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 13:11:21.144105  892584 command_runner.go:130] > # allowed_annotations = []
	I0520 13:11:21.144111  892584 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 13:11:21.144116  892584 command_runner.go:130] > # Where:
	I0520 13:11:21.144121  892584 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 13:11:21.144129  892584 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 13:11:21.144134  892584 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 13:11:21.144140  892584 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 13:11:21.144144  892584 command_runner.go:130] > #   in $PATH.
	I0520 13:11:21.144149  892584 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 13:11:21.144156  892584 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 13:11:21.144163  892584 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 13:11:21.144169  892584 command_runner.go:130] > #   state.
	I0520 13:11:21.144175  892584 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 13:11:21.144182  892584 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0520 13:11:21.144188  892584 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 13:11:21.144195  892584 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 13:11:21.144201  892584 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 13:11:21.144211  892584 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 13:11:21.144216  892584 command_runner.go:130] > #   The currently recognized values are:
	I0520 13:11:21.144222  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 13:11:21.144231  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 13:11:21.144236  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 13:11:21.144242  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 13:11:21.144250  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 13:11:21.144256  892584 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 13:11:21.144264  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 13:11:21.144270  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 13:11:21.144278  892584 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 13:11:21.144284  892584 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 13:11:21.144290  892584 command_runner.go:130] > #   deprecated option "conmon".
	I0520 13:11:21.144296  892584 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 13:11:21.144303  892584 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 13:11:21.144308  892584 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 13:11:21.144315  892584 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 13:11:21.144323  892584 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0520 13:11:21.144330  892584 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 13:11:21.144336  892584 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 13:11:21.144341  892584 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 13:11:21.144346  892584 command_runner.go:130] > #
	I0520 13:11:21.144350  892584 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 13:11:21.144353  892584 command_runner.go:130] > #
	I0520 13:11:21.144360  892584 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 13:11:21.144368  892584 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 13:11:21.144371  892584 command_runner.go:130] > #
	I0520 13:11:21.144379  892584 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 13:11:21.144389  892584 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 13:11:21.144392  892584 command_runner.go:130] > #
	I0520 13:11:21.144398  892584 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 13:11:21.144404  892584 command_runner.go:130] > # feature.
	I0520 13:11:21.144408  892584 command_runner.go:130] > #
	I0520 13:11:21.144415  892584 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 13:11:21.144421  892584 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 13:11:21.144429  892584 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 13:11:21.144438  892584 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 13:11:21.144446  892584 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 13:11:21.144450  892584 command_runner.go:130] > #
	I0520 13:11:21.144458  892584 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 13:11:21.144464  892584 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 13:11:21.144469  892584 command_runner.go:130] > #
	I0520 13:11:21.144475  892584 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 13:11:21.144482  892584 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 13:11:21.144486  892584 command_runner.go:130] > #
	I0520 13:11:21.144493  892584 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 13:11:21.144499  892584 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 13:11:21.144505  892584 command_runner.go:130] > # limitation.
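As a hedged illustration of the notifier setup described above (the handler name and binary path below are assumptions, not taken from this config), a runtime entry that is allowed to process the notifier annotation could look like the following; the Pod itself would then carry the io.kubernetes.cri-o.seccompNotifierAction annotation and restartPolicy: Never, as noted above:
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	# Permit this handler to act on the seccomp notifier annotation.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]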
	I0520 13:11:21.144509  892584 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 13:11:21.144515  892584 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 13:11:21.144519  892584 command_runner.go:130] > runtime_type = "oci"
	I0520 13:11:21.144524  892584 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 13:11:21.144528  892584 command_runner.go:130] > runtime_config_path = ""
	I0520 13:11:21.144532  892584 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 13:11:21.144538  892584 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 13:11:21.144542  892584 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 13:11:21.144548  892584 command_runner.go:130] > monitor_env = [
	I0520 13:11:21.144553  892584 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:11:21.144556  892584 command_runner.go:130] > ]
	I0520 13:11:21.144560  892584 command_runner.go:130] > privileged_without_host_devices = false
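Beyond the runc handler above, the runtime-handler format documented earlier allows additional entries. A minimal sketch of a second handler, assuming a crun binary at /usr/bin/crun (the path, root, and platform mapping are illustrative assumptions, not part of this cluster's config):
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Optional per-platform override of the runtime binary, in the "os/arch" form shown above.
	platform_runtime_paths = { "linux/arm64" = "/usr/bin/crun-arm64" }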
	I0520 13:11:21.144566  892584 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 13:11:21.144573  892584 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 13:11:21.144579  892584 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 13:11:21.144588  892584 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 13:11:21.144595  892584 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 13:11:21.144602  892584 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 13:11:21.144614  892584 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 13:11:21.144623  892584 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 13:11:21.144629  892584 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 13:11:21.144635  892584 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 13:11:21.144641  892584 command_runner.go:130] > # Example:
	I0520 13:11:21.144645  892584 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 13:11:21.144654  892584 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 13:11:21.144661  892584 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 13:11:21.144666  892584 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 13:11:21.144670  892584 command_runner.go:130] > # cpuset = "0-1"
	I0520 13:11:21.144674  892584 command_runner.go:130] > # cpushares = 0
	I0520 13:11:21.144679  892584 command_runner.go:130] > # Where:
	I0520 13:11:21.144683  892584 command_runner.go:130] > # The workload name is workload-type.
	I0520 13:11:21.144689  892584 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 13:11:21.144698  892584 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 13:11:21.144704  892584 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 13:11:21.144713  892584 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 13:11:21.144728  892584 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0520 13:11:21.144732  892584 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 13:11:21.144741  892584 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 13:11:21.144745  892584 command_runner.go:130] > # Default value is set to true
	I0520 13:11:21.144751  892584 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 13:11:21.144756  892584 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 13:11:21.144762  892584 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 13:11:21.144767  892584 command_runner.go:130] > # Default value is set to 'false'
	I0520 13:11:21.144771  892584 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 13:11:21.144777  892584 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 13:11:21.144782  892584 command_runner.go:130] > #
	I0520 13:11:21.144787  892584 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 13:11:21.144792  892584 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 13:11:21.144798  892584 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 13:11:21.144803  892584 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 13:11:21.144808  892584 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 13:11:21.144811  892584 command_runner.go:130] > [crio.image]
	I0520 13:11:21.144816  892584 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 13:11:21.144820  892584 command_runner.go:130] > # default_transport = "docker://"
	I0520 13:11:21.144828  892584 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 13:11:21.144836  892584 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:11:21.144839  892584 command_runner.go:130] > # global_auth_file = ""
	I0520 13:11:21.144844  892584 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 13:11:21.144848  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.144852  892584 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 13:11:21.144859  892584 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 13:11:21.144864  892584 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:11:21.144868  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.144872  892584 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 13:11:21.144877  892584 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 13:11:21.144883  892584 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 13:11:21.144888  892584 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 13:11:21.144893  892584 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 13:11:21.144897  892584 command_runner.go:130] > # pause_command = "/pause"
	I0520 13:11:21.144902  892584 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 13:11:21.144907  892584 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 13:11:21.144912  892584 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 13:11:21.144917  892584 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 13:11:21.144922  892584 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 13:11:21.144927  892584 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 13:11:21.144930  892584 command_runner.go:130] > # pinned_images = [
	I0520 13:11:21.144933  892584 command_runner.go:130] > # ]
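To make the exact/glob/keyword matching described above concrete, a hypothetical pinned_images list (the image names are assumptions used for illustration only) could combine all three pattern styles:
	pinned_images = [
		"registry.k8s.io/pause:3.9",       # exact match: must match the entire name
		"registry.k8s.io/kube-apiserver*", # glob: wildcard only at the end
		"*istio*",                         # keyword: wildcards on both ends
	]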
	I0520 13:11:21.144939  892584 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 13:11:21.144944  892584 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 13:11:21.144952  892584 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 13:11:21.144957  892584 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 13:11:21.144962  892584 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 13:11:21.144965  892584 command_runner.go:130] > # signature_policy = ""
	I0520 13:11:21.144972  892584 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 13:11:21.144978  892584 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 13:11:21.144984  892584 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 13:11:21.144990  892584 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 13:11:21.144995  892584 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 13:11:21.145001  892584 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0520 13:11:21.145009  892584 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 13:11:21.145017  892584 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 13:11:21.145021  892584 command_runner.go:130] > # changing them here.
	I0520 13:11:21.145025  892584 command_runner.go:130] > # insecure_registries = [
	I0520 13:11:21.145028  892584 command_runner.go:130] > # ]
	I0520 13:11:21.145034  892584 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 13:11:21.145041  892584 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 13:11:21.145047  892584 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 13:11:21.145052  892584 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 13:11:21.145056  892584 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 13:11:21.145063  892584 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0520 13:11:21.145066  892584 command_runner.go:130] > # CNI plugins.
	I0520 13:11:21.145071  892584 command_runner.go:130] > [crio.network]
	I0520 13:11:21.145076  892584 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 13:11:21.145081  892584 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 13:11:21.145085  892584 command_runner.go:130] > # cni_default_network = ""
	I0520 13:11:21.145090  892584 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 13:11:21.145094  892584 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 13:11:21.145099  892584 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 13:11:21.145102  892584 command_runner.go:130] > # plugin_dirs = [
	I0520 13:11:21.145105  892584 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 13:11:21.145108  892584 command_runner.go:130] > # ]
	I0520 13:11:21.145113  892584 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 13:11:21.145117  892584 command_runner.go:130] > [crio.metrics]
	I0520 13:11:21.145122  892584 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 13:11:21.145131  892584 command_runner.go:130] > enable_metrics = true
	I0520 13:11:21.145135  892584 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 13:11:21.145139  892584 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 13:11:21.145146  892584 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0520 13:11:21.145154  892584 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 13:11:21.145160  892584 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 13:11:21.145166  892584 command_runner.go:130] > # metrics_collectors = [
	I0520 13:11:21.145169  892584 command_runner.go:130] > # 	"operations",
	I0520 13:11:21.145176  892584 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 13:11:21.145181  892584 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 13:11:21.145186  892584 command_runner.go:130] > # 	"operations_errors",
	I0520 13:11:21.145190  892584 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 13:11:21.145195  892584 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 13:11:21.145201  892584 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 13:11:21.145205  892584 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 13:11:21.145209  892584 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 13:11:21.145212  892584 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 13:11:21.145216  892584 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 13:11:21.145221  892584 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 13:11:21.145229  892584 command_runner.go:130] > # 	"containers_oom_total",
	I0520 13:11:21.145233  892584 command_runner.go:130] > # 	"containers_oom",
	I0520 13:11:21.145236  892584 command_runner.go:130] > # 	"processes_defunct",
	I0520 13:11:21.145240  892584 command_runner.go:130] > # 	"operations_total",
	I0520 13:11:21.145243  892584 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 13:11:21.145248  892584 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 13:11:21.145252  892584 command_runner.go:130] > # 	"operations_errors_total",
	I0520 13:11:21.145256  892584 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 13:11:21.145263  892584 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 13:11:21.145267  892584 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 13:11:21.145273  892584 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 13:11:21.145278  892584 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 13:11:21.145284  892584 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 13:11:21.145289  892584 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 13:11:21.145295  892584 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 13:11:21.145298  892584 command_runner.go:130] > # ]
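As noted above, collector names may be written with or without the "crio_" / "container_runtime_" prefixes. The lines below are only an illustrative sketch, not part of the logged configuration; any one of the three spellings enables the same collector:
	# metrics_collectors = [ "operations" ]
	# metrics_collectors = [ "crio_operations" ]
	# metrics_collectors = [ "container_runtime_crio_operations" ]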
	I0520 13:11:21.145303  892584 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 13:11:21.145309  892584 command_runner.go:130] > # metrics_port = 9090
	I0520 13:11:21.145315  892584 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 13:11:21.145324  892584 command_runner.go:130] > # metrics_socket = ""
	I0520 13:11:21.145331  892584 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 13:11:21.145342  892584 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 13:11:21.145352  892584 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 13:11:21.145360  892584 command_runner.go:130] > # certificate on any modification event.
	I0520 13:11:21.145365  892584 command_runner.go:130] > # metrics_cert = ""
	I0520 13:11:21.145373  892584 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 13:11:21.145378  892584 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 13:11:21.145382  892584 command_runner.go:130] > # metrics_key = ""
	I0520 13:11:21.145388  892584 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 13:11:21.145394  892584 command_runner.go:130] > [crio.tracing]
	I0520 13:11:21.145399  892584 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 13:11:21.145402  892584 command_runner.go:130] > # enable_tracing = false
	I0520 13:11:21.145408  892584 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0520 13:11:21.145414  892584 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 13:11:21.145420  892584 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 13:11:21.145432  892584 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 13:11:21.145439  892584 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 13:11:21.145445  892584 command_runner.go:130] > [crio.nri]
	I0520 13:11:21.145456  892584 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 13:11:21.145461  892584 command_runner.go:130] > # enable_nri = false
	I0520 13:11:21.145468  892584 command_runner.go:130] > # NRI socket to listen on.
	I0520 13:11:21.145472  892584 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 13:11:21.145479  892584 command_runner.go:130] > # NRI plugin directory to use.
	I0520 13:11:21.145484  892584 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 13:11:21.145493  892584 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 13:11:21.145498  892584 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 13:11:21.145503  892584 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 13:11:21.145509  892584 command_runner.go:130] > # nri_disable_connections = false
	I0520 13:11:21.145515  892584 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 13:11:21.145522  892584 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 13:11:21.145530  892584 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 13:11:21.145534  892584 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 13:11:21.145539  892584 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 13:11:21.145545  892584 command_runner.go:130] > [crio.stats]
	I0520 13:11:21.145551  892584 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 13:11:21.145558  892584 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 13:11:21.145562  892584 command_runner.go:130] > # stats_collection_period = 0
	I0520 13:11:21.145597  892584 command_runner.go:130] ! time="2024-05-20 13:11:21.113853907Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 13:11:21.145611  892584 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 13:11:21.145713  892584 cni.go:84] Creating CNI manager for ""
	I0520 13:11:21.145729  892584 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 13:11:21.145746  892584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:11:21.145767  892584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-865571 NodeName:multinode-865571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:11:21.145913  892584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-865571"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:11:21.145976  892584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:11:21.156009  892584 command_runner.go:130] > kubeadm
	I0520 13:11:21.156033  892584 command_runner.go:130] > kubectl
	I0520 13:11:21.156040  892584 command_runner.go:130] > kubelet
	I0520 13:11:21.156065  892584 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:11:21.156117  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:11:21.165394  892584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0520 13:11:21.182167  892584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:11:21.198614  892584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0520 13:11:21.215064  892584 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I0520 13:11:21.218894  892584 command_runner.go:130] > 192.168.39.78	control-plane.minikube.internal
	I0520 13:11:21.218964  892584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:11:21.350623  892584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:11:21.365944  892584 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571 for IP: 192.168.39.78
	I0520 13:11:21.365974  892584 certs.go:194] generating shared ca certs ...
	I0520 13:11:21.366009  892584 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:11:21.366186  892584 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:11:21.366224  892584 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:11:21.366234  892584 certs.go:256] generating profile certs ...
	I0520 13:11:21.366309  892584 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/client.key
	I0520 13:11:21.366369  892584 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.key.5cb03992
	I0520 13:11:21.366403  892584 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.key
	I0520 13:11:21.366414  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:11:21.366435  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:11:21.366447  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:11:21.366456  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:11:21.366466  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:11:21.366478  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:11:21.366487  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:11:21.366501  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:11:21.366559  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:11:21.366608  892584 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:11:21.366622  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:11:21.366655  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:11:21.366682  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:11:21.366703  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:11:21.366745  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:11:21.366773  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.366786  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.366799  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.367419  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:11:21.391623  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:11:21.415017  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:11:21.438401  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:11:21.462297  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 13:11:21.485042  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 13:11:21.508111  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:11:21.531614  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:11:21.554150  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:11:21.576977  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:11:21.600009  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:11:21.622760  892584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:11:21.638799  892584 ssh_runner.go:195] Run: openssl version
	I0520 13:11:21.644300  892584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 13:11:21.644461  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:11:21.655069  892584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.659890  892584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.659996  892584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.660058  892584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.665490  892584 command_runner.go:130] > 51391683
	I0520 13:11:21.665549  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 13:11:21.675879  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:11:21.687234  892584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.691394  892584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.691543  892584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.691592  892584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.697018  892584 command_runner.go:130] > 3ec20f2e
	I0520 13:11:21.697197  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:11:21.706509  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:11:21.717337  892584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.721495  892584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.721637  892584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.721690  892584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.727046  892584 command_runner.go:130] > b5213941
	I0520 13:11:21.727225  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:11:21.737177  892584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:11:21.741576  892584 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:11:21.741602  892584 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 13:11:21.741611  892584 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0520 13:11:21.741622  892584 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:11:21.741631  892584 command_runner.go:130] > Access: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741639  892584 command_runner.go:130] > Modify: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741651  892584 command_runner.go:130] > Change: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741658  892584 command_runner.go:130] >  Birth: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741713  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:11:21.747463  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.747541  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:11:21.753175  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.753243  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:11:21.759102  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.759176  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:11:21.765951  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.766004  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:11:21.772016  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.772226  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 13:11:21.777726  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.777965  892584 kubeadm.go:391] StartCluster: {Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:11:21.778085  892584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:11:21.778118  892584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:11:21.812887  892584 command_runner.go:130] > 9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063
	I0520 13:11:21.812926  892584 command_runner.go:130] > 49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380
	I0520 13:11:21.812933  892584 command_runner.go:130] > ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a
	I0520 13:11:21.812940  892584 command_runner.go:130] > 69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947
	I0520 13:11:21.812947  892584 command_runner.go:130] > 06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b
	I0520 13:11:21.812952  892584 command_runner.go:130] > 0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7
	I0520 13:11:21.812957  892584 command_runner.go:130] > 5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483
	I0520 13:11:21.813098  892584 command_runner.go:130] > e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66
	I0520 13:11:21.814377  892584 cri.go:89] found id: "9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063"
	I0520 13:11:21.814399  892584 cri.go:89] found id: "49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380"
	I0520 13:11:21.814404  892584 cri.go:89] found id: "ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a"
	I0520 13:11:21.814409  892584 cri.go:89] found id: "69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947"
	I0520 13:11:21.814413  892584 cri.go:89] found id: "06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b"
	I0520 13:11:21.814418  892584 cri.go:89] found id: "0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7"
	I0520 13:11:21.814422  892584 cri.go:89] found id: "5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483"
	I0520 13:11:21.814427  892584 cri.go:89] found id: "e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66"
	I0520 13:11:21.814431  892584 cri.go:89] found id: ""
	I0520 13:11:21.814471  892584 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.216264412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fab2fb80-c9e9-4a07-8a7f-6f65e923d0b4 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.217216565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,PodSandboxId:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716210722437237035,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,PodSandboxId:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716210688936835109,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,PodSandboxId:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716210688896796641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,PodSandboxId:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716210688812495630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]
string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,PodSandboxId:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716210688735139812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.ku
bernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,PodSandboxId:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210683945583978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,PodSandboxId:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210683870216754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7
02264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,PodSandboxId:2db53cc70394a7b78dd4ffab8cc12c10e3c78b7b9852a34ee6bf3aa76b4db655,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716210683869191601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,PodSandboxId:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210683792239408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a58ddb4bc5ae1e43a201f39acb74b3fc8eb3fc621b2ae13717afc9bd73ff76,PodSandboxId:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716210381937703531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380,PodSandboxId:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716210337721235776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063,PodSandboxId:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716210337726240942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947,PodSandboxId:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716210336013827850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a,PodSandboxId:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716210336035187991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18
-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7,PodSandboxId:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716210315376725938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b,PodSandboxId:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210315418737817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483,PodSandboxId:b54986d6b3c407e4fbf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716210315371140949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},An
notations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66,PodSandboxId:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716210315267220057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 321d3fc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fab2fb80-c9e9-4a07-8a7f-6f65e923d0b4 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.223358940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d77b358-55bd-48f2-82a2-f682b4725af1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.223655761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d77b358-55bd-48f2-82a2-f682b4725af1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.224010494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,PodSandboxId:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716210722437237035,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,PodSandboxId:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716210688936835109,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,PodSandboxId:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716210688896796641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,PodSandboxId:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716210688812495630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]
string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,PodSandboxId:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716210688735139812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.ku
bernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,PodSandboxId:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210683945583978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,PodSandboxId:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210683870216754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7
02264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,PodSandboxId:2db53cc70394a7b78dd4ffab8cc12c10e3c78b7b9852a34ee6bf3aa76b4db655,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716210683869191601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,PodSandboxId:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210683792239408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a58ddb4bc5ae1e43a201f39acb74b3fc8eb3fc621b2ae13717afc9bd73ff76,PodSandboxId:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716210381937703531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380,PodSandboxId:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716210337721235776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063,PodSandboxId:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716210337726240942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947,PodSandboxId:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716210336013827850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a,PodSandboxId:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716210336035187991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18
-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7,PodSandboxId:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716210315376725938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b,PodSandboxId:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210315418737817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483,PodSandboxId:b54986d6b3c407e4fbf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716210315371140949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},An
notations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66,PodSandboxId:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716210315267220057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 321d3fc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d77b358-55bd-48f2-82a2-f682b4725af1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.224983764Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5851ec85-aa76-43a2-9add-529fea9acde8 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.225077957Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210722489328945,StartedAt:1716210722515850208,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/55131023-9fdc-4c5b-86f3-0963e13b54c2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/55131023-9fdc-4c5b-86f3-0963e13b54c2/containers/busybox/a7e13f1f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/55131023-9fdc-4c5b-86f3-0963e13b54c2/volumes/kubernetes.io~projected/kube-api-access-l8q9g,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-fc5497c4f-c8hj2_55131023-9fdc-4c5b-86f3-0963e13b54c2/busybox/1.log,Resources:&Co
ntainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5851ec85-aa76-43a2-9add-529fea9acde8 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.225891220Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7c3b8300-4d79-434f-9ed9-71fb1723b678 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.226018704Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210689178302283,StartedAt:1716210689214909633,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240202-8f1494ea,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a05815a1-89f4-4adf-88f3-d85b1c969cd6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a05815a1-89f4-4adf-88f3-d85b1c969cd6/containers/kindnet-cni/665e29c1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath
:/etc/cni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/a05815a1-89f4-4adf-88f3-d85b1c969cd6/volumes/kubernetes.io~projected/kube-api-access-xhz2n,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-p69ft_a05815a1-89f4-4adf-88f3-d85b1c969cd6/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7c3b8300-4d79-434f-9ed9-71fb17
23b678 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.226841483Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,Verbose:false,}" file="otel-collector/interceptors.go:62" id=bced0bc2-ed6f-4477-88d1-d75358553ba0 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.227280685Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210689034141973,StartedAt:1716210689088899059,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/2bdfbdfb-82cd-402d-9ec5-42adc84fa06c/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2bdfbdfb-82cd-402d-9ec5-42adc84fa06c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2bdfbdfb-82cd-402d-9ec5-42adc84fa06c/containers/coredns/aa608bb1,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/2bdfbdfb-82cd-402d-9ec5-42adc84fa06c/volumes/kubernetes.io~projected/kube-api-access-z2vk2,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-cck8j_2bdfbdfb-82cd-402d-9ec5-42adc84fa06c/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bced0bc2-ed6f-4477-88d1-d75358553ba0 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.230234314Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8ea9e032-9078-4464-a2af-fcf72042d7b0 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.230612598Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210688977482924,StartedAt:1716210689085553443,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/826e8825-487e-4a9e-8a18-21245055c769/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/826e8825-487e-4a9e-8a18-21245055c769/containers/kube-proxy/15b0f51e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/
lib/kubelet/pods/826e8825-487e-4a9e-8a18-21245055c769/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/826e8825-487e-4a9e-8a18-21245055c769/volumes/kubernetes.io~projected/kube-api-access-m6j5m,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-z8dbs_826e8825-487e-4a9e-8a18-21245055c769/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-c
ollector/interceptors.go:74" id=8ea9e032-9078-4464-a2af-fcf72042d7b0 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.230764005Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=bac24714-9ac2-4f3f-9060-3969a386cdb2 name=/runtime.v1.RuntimeService/Status
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.230826423Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bac24714-9ac2-4f3f-9060-3969a386cdb2 name=/runtime.v1.RuntimeService/Status
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.231153386Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d70f9865-6418-4816-852e-c576be66e20c name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.231344944Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210688855027962,StartedAt:1716210688970761597,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b9037bf4-865b-4ef6-8138-1a3c6a8d1500/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b9037bf4-865b-4ef6-8138-1a3c6a8d1500/containers/storage-provisioner/50f5745e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b9037bf4-865b-4ef6-8138-1a3c6a8d1500/volumes/kubernetes.io~projected/kube-api-access-m8tft,Readonly:true,SelinuxRelabel:fals
e,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_b9037bf4-865b-4ef6-8138-1a3c6a8d1500/storage-provisioner/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d70f9865-6418-4816-852e-c576be66e20c name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.231758611Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,Verbose:false,}" file="otel-collector/interceptors.go:62" id=af2989c6-c5ed-429f-8b0c-5e9866e9a50d name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.231839653Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210684064226588,StartedAt:1716210684201218988,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a28ed0baba5785958bfc3b772e1e289e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a28ed0baba5785958bfc3b772e1e289e/containers/kube-scheduler/5d6290bd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-multinode-865571_a28ed0baba5785958bfc3b772e1e289e/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeri
od:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=af2989c6-c5ed-429f-8b0c-5e9866e9a50d name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.232233165Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a69ddbc3-a9fd-41a3-ad27-9f820c94123d name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.232323304Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210683983041047,StartedAt:1716210684082777280,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/aefcc152b93d64e03162596bcb208fb1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/aefcc152b93d64e03162596bcb208fb1/containers/kube-apiserver/4e8e82bb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Containe
rPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-multinode-865571_aefcc152b93d64e03162596bcb208fb1/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a69ddbc3-a9fd-41a3-ad27-9f820c94123d name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.232868274Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2e6c0fc8-4031-4f7b-87db-6524699f4cfd name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.233157921Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210683915944827,StartedAt:1716210684003527156,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/bea551ee8f74628c5c3ff37e899e26a0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/bea551ee8f74628c5c3ff37e899e26a0/containers/kube-controller-manager/6fa3e900,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,
UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-multinode-865571_bea551ee8f74628c5c3ff37e899e26a0/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMem
s:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2e6c0fc8-4031-4f7b-87db-6524699f4cfd name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.233682985Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d968b3a5-1d60-468b-9266-0138c7d1e583 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 13:12:43 multinode-865571 crio[2863]: time="2024-05-20 13:12:43.233770197Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1716210683863915825,StartedAt:1716210683980306742,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/001f5f73c09833ac52c0fd669fee7361/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/001f5f73c09833ac52c0fd669fee7361/containers/etcd/bb1e4062,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-m
ultinode-865571_001f5f73c09833ac52c0fd669fee7361/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d968b3a5-1d60-468b-9266-0138c7d1e583 name=/runtime.v1.RuntimeService/ContainerStatus
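	
	The debug entries above are per-container CRI ContainerStatus request/response pairs logged by crio. For reference only (not part of the captured log), the status of an individual container could be inspected directly on the node with crictl, for example using the kindnet-cni container ID from the first response above:
	
	  sudo crictl inspect ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95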
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7f4a0de9fef6e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      40 seconds ago       Running             busybox                   1                   cf6dd7caebc68       busybox-fc5497c4f-c8hj2
	ca6e5c0b3bc62       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   1e895fbf4fd2c       kindnet-p69ft
	bcfb651082e6f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   db2c0c5a5df49       coredns-7db6d8ff4d-cck8j
	25ca0eed2cac1       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   4008986e60e56       kube-proxy-z8dbs
	94037199ce629       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   1262dc3c42627       storage-provisioner
	cf4d2cd83a9cd       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   77bbfc6a88c3b       kube-scheduler-multinode-865571
	c3686a8518528       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   33d2ad28bdad1       kube-apiserver-multinode-865571
	3d4a3b19bb8e9       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   2db53cc70394a       kube-controller-manager-multinode-865571
	00722f6248827       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   ebacc9eb00e83       etcd-multinode-865571
	26a58ddb4bc5a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   2fae850c319d2       busybox-fc5497c4f-c8hj2
	9b374e240a6cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   f64711606f5e8       coredns-7db6d8ff4d-cck8j
	49209f7e35c79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   ee49075e2aa27       storage-provisioner
	ae13e8e8db5a4       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   4901288e3b49a       kube-proxy-z8dbs
	69415b4290f14       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   cd500b16c8cb8       kindnet-p69ft
	06e853ffdd1f3       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   c9f1dcd9b1086       kube-apiserver-multinode-865571
	0332c5cdab59d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   dc55454121a3d       kube-scheduler-multinode-865571
	5e94c8b3558a8       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   b54986d6b3c40       kube-controller-manager-multinode-865571
	e379bbf0ff586       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   7b72f9c76df4c       etcd-multinode-865571
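	
	The table above lists every CRI-O container on the primary node, including containers from the earlier run that have since exited. For reference only (not captured output), an equivalent listing could be produced on the node with:
	
	  sudo crictl ps -a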
	
	
	==> coredns [9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063] <==
	[INFO] 10.244.0.3:58828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001650207s
	[INFO] 10.244.0.3:53349 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101685s
	[INFO] 10.244.0.3:53577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00005782s
	[INFO] 10.244.0.3:54163 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000932243s
	[INFO] 10.244.0.3:43945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134913s
	[INFO] 10.244.0.3:47395 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053743s
	[INFO] 10.244.0.3:32849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080995s
	[INFO] 10.244.1.2:43178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121678s
	[INFO] 10.244.1.2:56268 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075606s
	[INFO] 10.244.1.2:44888 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065126s
	[INFO] 10.244.1.2:57864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097621s
	[INFO] 10.244.0.3:55327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104835s
	[INFO] 10.244.0.3:55984 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064871s
	[INFO] 10.244.0.3:47136 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094416s
	[INFO] 10.244.0.3:42003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049721s
	[INFO] 10.244.1.2:44612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115061s
	[INFO] 10.244.1.2:33740 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179295s
	[INFO] 10.244.1.2:49252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097268s
	[INFO] 10.244.1.2:42925 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000201463s
	[INFO] 10.244.0.3:51539 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074001s
	[INFO] 10.244.0.3:58314 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000056116s
	[INFO] 10.244.0.3:52703 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000039389s
	[INFO] 10.244.0.3:49801 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000029311s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51456 - 39602 "HINFO IN 7001816019168731813.6639323213373340617. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018385616s
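	
	Only the startup banner and a single HINFO lookup had been logged by the restarted coredns container at capture time. For reference only (not captured output), the same logs could be retrieved from a live cluster with:
	
	  kubectl -n kube-system logs coredns-7db6d8ff4d-cck8j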
	
	
	==> describe nodes <==
	Name:               multinode-865571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-865571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=multinode-865571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_05_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:05:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-865571
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:12:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-865571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fc2d737a0984208b366b4fc8aa543ec
	  System UUID:                6fc2d737-a098-4208-b366-b4fc8aa543ec
	  Boot ID:                    98d576f3-e9e6-429a-b515-0222cfdb89ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-c8hj2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 coredns-7db6d8ff4d-cck8j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 etcd-multinode-865571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m23s
	  kube-system                 kindnet-p69ft                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-multinode-865571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-controller-manager-multinode-865571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-proxy-z8dbs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-multinode-865571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m7s                   kube-proxy       
	  Normal  Starting                 74s                    kube-proxy       
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m29s (x8 over 7m29s)  kubelet          Node multinode-865571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x8 over 7m29s)  kubelet          Node multinode-865571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x7 over 7m29s)  kubelet          Node multinode-865571 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m23s                  kubelet          Node multinode-865571 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s                  kubelet          Node multinode-865571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m23s                  kubelet          Node multinode-865571 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m23s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m10s                  node-controller  Node multinode-865571 event: Registered Node multinode-865571 in Controller
	  Normal  NodeReady                7m6s                   kubelet          Node multinode-865571 status is now: NodeReady
	  Normal  Starting                 80s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)      kubelet          Node multinode-865571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)      kubelet          Node multinode-865571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)      kubelet          Node multinode-865571 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                    node-controller  Node multinode-865571 event: Registered Node multinode-865571 in Controller
	
	
	Name:               multinode-865571-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-865571-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=multinode-865571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_12_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:12:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-865571-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:12:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:12:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:12:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:12:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:12:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    multinode-865571-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 da55c52cb4d14d08a06e12ee1db3a0fe
	  System UUID:                da55c52c-b4d1-4d08-a06e-12ee1db3a0fe
	  Boot ID:                    2c3d7f87-9f65-413a-a97a-d130b737936f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d52mq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kindnet-zp4xs              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-pntzt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m28s                  kube-proxy  
	  Normal  Starting                 31s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet     Node multinode-865571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet     Node multinode-865571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet     Node multinode-865571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m25s                  kubelet     Node multinode-865571-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  36s (x2 over 36s)      kubelet     Node multinode-865571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x2 over 36s)      kubelet     Node multinode-865571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x2 over 36s)      kubelet     Node multinode-865571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                29s                    kubelet     Node multinode-865571-m02 status is now: NodeReady
	
	
	Name:               multinode-865571-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-865571-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=multinode-865571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_12_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:12:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-865571-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:12:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:12:40 +0000   Mon, 20 May 2024 13:12:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:12:40 +0000   Mon, 20 May 2024 13:12:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:12:40 +0000   Mon, 20 May 2024 13:12:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:12:40 +0000   Mon, 20 May 2024 13:12:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    multinode-865571-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 35b807a7b74a43e1867cad65ceb51ac8
	  System UUID:                35b807a7-b74a-43e1-867c-ad65ceb51ac8
	  Boot ID:                    32f515f7-2c0f-4254-bded-02498ff8d7b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x2f5v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m51s
	  kube-system                 kube-proxy-smmdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m45s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m9s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  5m51s (x2 over 5m51s)  kubelet     Node multinode-865571-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x2 over 5m51s)  kubelet     Node multinode-865571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x2 over 5m51s)  kubelet     Node multinode-865571-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m42s                  kubelet     Node multinode-865571-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m13s (x2 over 5m13s)  kubelet     Node multinode-865571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x2 over 5m13s)  kubelet     Node multinode-865571-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m13s (x2 over 5m13s)  kubelet     Node multinode-865571-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m13s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m7s                   kubelet     Node multinode-865571-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  10s (x2 over 10s)      kubelet     Node multinode-865571-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x2 over 10s)      kubelet     Node multinode-865571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x2 over 10s)      kubelet     Node multinode-865571-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-865571-m03 status is now: NodeReady
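	
	The three node summaries above (multinode-865571, -m02, -m03) appear to be standard kubectl describe output captured at the end of the test. For reference only (not captured output), they could be regenerated against a live cluster with commands such as:
	
	  kubectl get nodes -o wide
	  kubectl describe node multinode-865571 multinode-865571-m02 multinode-865571-m03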
	
	
	==> dmesg <==
	[  +0.059267] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059472] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.188674] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.110989] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261702] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.131122] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.719255] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.063826] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.982838] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.079635] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.576272] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[  +0.100968] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:06] kauditd_printk_skb: 82 callbacks suppressed
	[May20 13:11] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.142158] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.159805] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.148025] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.276962] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +6.977805] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[  +0.083228] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.571542] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +5.671934] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.659013] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.314619] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[May20 13:12] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099] <==
	{"level":"info","ts":"2024-05-20T13:11:24.159856Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:11:24.159865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:11:24.160083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 switched to configuration voters=(9511011272858222243)"}
	{"level":"info","ts":"2024-05-20T13:11:24.160148Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","added-peer-id":"83fde65c75733ea3","added-peer-peer-urls":["https://192.168.39.78:2380"]}
	{"level":"info","ts":"2024-05-20T13:11:24.160242Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:11:24.160281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:11:24.175124Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:11:24.175308Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"83fde65c75733ea3","initial-advertise-peer-urls":["https://192.168.39.78:2380"],"listen-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:11:24.175358Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:11:24.175909Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:11:24.175941Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:11:25.620486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T13:11:25.620522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:11:25.620571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 2"}
	{"level":"info","ts":"2024-05-20T13:11:25.620585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.620591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.62061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.62062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.623313Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"83fde65c75733ea3","local-member-attributes":"{Name:multinode-865571 ClientURLs:[https://192.168.39.78:2379]}","request-path":"/0/members/83fde65c75733ea3/attributes","cluster-id":"254f9db842b1870b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:11:25.623479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:11:25.623565Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:11:25.623595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:11:25.623509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:11:25.62576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"info","ts":"2024-05-20T13:11:25.625886Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66] <==
	{"level":"info","ts":"2024-05-20T13:05:15.951566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:05:15.951597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:05:15.955473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:05:15.9556Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:05:15.955693Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:05:15.957146Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"warn","ts":"2024-05-20T13:06:09.860745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.722161ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4513609126419432577 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-865571-m02.17d1343cff565fad\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-865571-m02.17d1343cff565fad\" value_size:646 lease:4513609126419431663 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T13:06:09.861334Z","caller":"traceutil/trace.go:171","msg":"trace[1115152632] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"249.956894ms","start":"2024-05-20T13:06:09.611357Z","end":"2024-05-20T13:06:09.861314Z","steps":["trace[1115152632] 'process raft request'  (duration: 89.08972ms)","trace[1115152632] 'compare'  (duration: 159.620008ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:06:09.861572Z","caller":"traceutil/trace.go:171","msg":"trace[1266436582] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"177.534117ms","start":"2024-05-20T13:06:09.684028Z","end":"2024-05-20T13:06:09.861562Z","steps":["trace[1266436582] 'process raft request'  (duration: 177.210527ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:06:52.82417Z","caller":"traceutil/trace.go:171","msg":"trace[1236714383] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"249.150681ms","start":"2024-05-20T13:06:52.574981Z","end":"2024-05-20T13:06:52.824132Z","steps":["trace[1236714383] 'process raft request'  (duration: 227.208873ms)","trace[1236714383] 'compare'  (duration: 21.721199ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:06:52.825718Z","caller":"traceutil/trace.go:171","msg":"trace[197194243] linearizableReadLoop","detail":"{readStateIndex:637; appliedIndex:635; }","duration":"189.261631ms","start":"2024-05-20T13:06:52.636445Z","end":"2024-05-20T13:06:52.825706Z","steps":["trace[197194243] 'read index received'  (duration: 165.859729ms)","trace[197194243] 'applied index is now lower than readState.Index'  (duration: 23.401433ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:06:52.825849Z","caller":"traceutil/trace.go:171","msg":"trace[1707923856] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"200.10424ms","start":"2024-05-20T13:06:52.625738Z","end":"2024-05-20T13:06:52.825842Z","steps":["trace[1707923856] 'process raft request'  (duration: 198.474205ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:06:52.826113Z","caller":"traceutil/trace.go:171","msg":"trace[755531351] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"167.449051ms","start":"2024-05-20T13:06:52.658658Z","end":"2024-05-20T13:06:52.826107Z","steps":["trace[755531351] 'process raft request'  (duration: 165.594197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:06:52.826425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.844398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-865571-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-20T13:06:52.826491Z","caller":"traceutil/trace.go:171","msg":"trace[1260058095] range","detail":"{range_begin:/registry/minions/multinode-865571-m03; range_end:; response_count:1; response_revision:604; }","duration":"190.117555ms","start":"2024-05-20T13:06:52.636359Z","end":"2024-05-20T13:06:52.826477Z","steps":["trace[1260058095] 'agreement among raft nodes before linearized reading'  (duration: 189.904815ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:09:42.288119Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T13:09:42.288286Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-865571","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-05-20T13:09:42.288507Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T13:09:42.288592Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T13:09:42.322223Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T13:09:42.32231Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T13:09:42.323983Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"83fde65c75733ea3","current-leader-member-id":"83fde65c75733ea3"}
	{"level":"info","ts":"2024-05-20T13:09:42.328078Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:09:42.328182Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:09:42.328208Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-865571","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> kernel <==
	 13:12:43 up 8 min,  0 users,  load average: 1.17, 0.50, 0.22
	Linux multinode-865571 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947] <==
	I0520 13:08:57.148918       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:07.162542       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:07.162730       1 main.go:227] handling current node
	I0520 13:09:07.162772       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:07.162802       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:07.162950       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:07.162990       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:17.167349       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:17.167478       1 main.go:227] handling current node
	I0520 13:09:17.167503       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:17.167522       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:17.167635       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:17.167655       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:27.180589       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:27.180715       1 main.go:227] handling current node
	I0520 13:09:27.180755       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:27.180774       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:27.180934       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:27.181028       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:37.185724       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:37.185959       1 main.go:227] handling current node
	I0520 13:09:37.185997       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:37.186018       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:37.186133       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:37.186152       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95] <==
	I0520 13:11:59.788516       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:12:09.793087       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:12:09.793131       1 main.go:227] handling current node
	I0520 13:12:09.793148       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:12:09.793154       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:12:09.793280       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:12:09.793307       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:12:19.802538       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:12:19.802762       1 main.go:227] handling current node
	I0520 13:12:19.802812       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:12:19.802834       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:12:19.803052       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:12:19.803135       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:12:29.815230       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:12:29.815475       1 main.go:227] handling current node
	I0520 13:12:29.815532       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:12:29.815566       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:12:29.815767       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:12:29.815801       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:12:39.829779       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:12:39.829869       1 main.go:227] handling current node
	I0520 13:12:39.829901       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:12:39.829919       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:12:39.830051       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:12:39.830077       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b] <==
	W0520 13:09:42.309235       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.309265       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.309295       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.311923       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.311992       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312022       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312050       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312089       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312119       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312144       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312171       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312197       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312224       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312254       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312282       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312310       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312336       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312366       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312502       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312545       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312575       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312604       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312633       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312811       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312921       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab] <==
	I0520 13:11:26.928921       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:11:26.932073       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:11:26.932172       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:11:26.932195       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:11:26.932982       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:11:26.933966       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:11:26.934070       1 shared_informer.go:320] Caches are synced for configmaps
	E0520 13:11:26.941094       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 13:11:26.957199       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:11:26.957261       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:11:26.957286       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:11:26.957309       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 13:11:26.957336       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:11:26.962637       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:11:26.965989       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:11:26.966068       1 policy_source.go:224] refreshing policies
	I0520 13:11:26.992523       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:11:27.835296       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:11:29.301653       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:11:29.495819       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 13:11:29.512176       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:11:29.581889       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:11:29.589604       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:11:40.171525       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 13:11:40.214262       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed] <==
	I0520 13:11:40.566468       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 13:11:40.588158       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:12:03.835650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.307229ms"
	I0520 13:12:03.835767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.355µs"
	I0520 13:12:03.848521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.82916ms"
	I0520 13:12:03.848711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.864µs"
	I0520 13:12:07.985873       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m02\" does not exist"
	I0520 13:12:07.994956       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m02" podCIDRs=["10.244.1.0/24"]
	I0520 13:12:09.883244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.661µs"
	I0520 13:12:09.920628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.974µs"
	I0520 13:12:09.933930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.117µs"
	I0520 13:12:09.949000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.076µs"
	I0520 13:12:09.956058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.386µs"
	I0520 13:12:09.958261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.413µs"
	I0520 13:12:10.987850       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.843µs"
	I0520 13:12:14.811234       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:14.835151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.887µs"
	I0520 13:12:14.848606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.335µs"
	I0520 13:12:16.311161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.144139ms"
	I0520 13:12:16.312725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.626µs"
	I0520 13:12:32.985952       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:33.942186       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:33.943166       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m03\" does not exist"
	I0520 13:12:33.955949       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m03" podCIDRs=["10.244.2.0/24"]
	I0520 13:12:40.259959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	
	
	==> kube-controller-manager [5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483] <==
	I0520 13:06:09.869135       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m02\" does not exist"
	I0520 13:06:09.889752       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m02" podCIDRs=["10.244.1.0/24"]
	I0520 13:06:13.815786       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-865571-m02"
	I0520 13:06:18.256482       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:06:20.458109       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.038122ms"
	I0520 13:06:20.488875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.605716ms"
	I0520 13:06:20.507048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.591366ms"
	I0520 13:06:20.507317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.81µs"
	I0520 13:06:22.671227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.326535ms"
	I0520 13:06:22.672459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.269µs"
	I0520 13:06:22.952135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.528878ms"
	I0520 13:06:22.952599       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.285µs"
	I0520 13:06:52.825625       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m03\" does not exist"
	I0520 13:06:52.826040       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:06:52.842266       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m03" podCIDRs=["10.244.2.0/24"]
	I0520 13:06:53.829784       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-865571-m03"
	I0520 13:07:01.092247       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m03"
	I0520 13:07:29.437229       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:07:30.728768       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:07:30.730011       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m03\" does not exist"
	I0520 13:07:30.740288       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m03" podCIDRs=["10.244.3.0/24"]
	I0520 13:07:36.699305       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:08:13.879348       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m03"
	I0520 13:08:13.923634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.78526ms"
	I0520 13:08:13.924844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.739µs"
	
	
	==> kube-proxy [25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5] <==
	I0520 13:11:29.238794       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:11:29.263244       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0520 13:11:29.321107       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:11:29.321159       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:11:29.321197       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:11:29.331906       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:11:29.332177       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:11:29.332223       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:11:29.334020       1 config.go:192] "Starting service config controller"
	I0520 13:11:29.334068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:11:29.334093       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:11:29.334097       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:11:29.335262       1 config.go:319] "Starting node config controller"
	I0520 13:11:29.335336       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:11:29.434503       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:11:29.434610       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:11:29.435851       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a] <==
	I0520 13:05:36.185761       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:05:36.194549       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0520 13:05:36.229787       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:05:36.229872       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:05:36.229888       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:05:36.232708       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:05:36.233022       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:05:36.233055       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:05:36.234357       1 config.go:192] "Starting service config controller"
	I0520 13:05:36.234460       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:05:36.234489       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:05:36.234493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:05:36.235232       1 config.go:319] "Starting node config controller"
	I0520 13:05:36.235263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:05:36.335612       1 shared_informer.go:320] Caches are synced for node config
	I0520 13:05:36.335643       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:05:36.335696       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7] <==
	E0520 13:05:19.170562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 13:05:19.174277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:05:19.174571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:05:19.219646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:05:19.219804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:05:19.281620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:05:19.281935       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:05:19.317090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:05:19.318036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:05:19.408109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 13:05:19.408196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 13:05:19.427492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:05:19.427541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 13:05:19.438119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:05:19.438167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:05:19.482155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:05:19.482241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:05:19.482256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:05:19.482496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:05:19.498187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:05:19.498284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:05:19.508641       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:05:19.508694       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 13:05:22.700445       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 13:09:42.285905       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1] <==
	I0520 13:11:25.077827       1 serving.go:380] Generated self-signed cert in-memory
	W0520 13:11:26.876721       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 13:11:26.876948       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:11:26.876981       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 13:11:26.877062       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 13:11:26.905245       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 13:11:26.905583       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:11:26.907832       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 13:11:26.908193       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 13:11:26.908312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 13:11:26.908465       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 13:11:27.009472       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 13:11:24 multinode-865571 kubelet[3079]: I0520 13:11:24.635807    3079 kubelet_node_status.go:73] "Attempting to register node" node="multinode-865571"
	May 20 13:11:27 multinode-865571 kubelet[3079]: I0520 13:11:27.074232    3079 kubelet_node_status.go:112] "Node was previously registered" node="multinode-865571"
	May 20 13:11:27 multinode-865571 kubelet[3079]: I0520 13:11:27.074676    3079 kubelet_node_status.go:76] "Successfully registered node" node="multinode-865571"
	May 20 13:11:27 multinode-865571 kubelet[3079]: I0520 13:11:27.076309    3079 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 13:11:27 multinode-865571 kubelet[3079]: I0520 13:11:27.077584    3079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 13:11:27 multinode-865571 kubelet[3079]: E0520 13:11:27.092029    3079 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-865571\" not found"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.113499    3079 apiserver.go:52] "Watching apiserver"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.116657    3079 topology_manager.go:215] "Topology Admit Handler" podUID="2bdfbdfb-82cd-402d-9ec5-42adc84fa06c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cck8j"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.116800    3079 topology_manager.go:215] "Topology Admit Handler" podUID="a05815a1-89f4-4adf-88f3-d85b1c969cd6" podNamespace="kube-system" podName="kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.117985    3079 topology_manager.go:215] "Topology Admit Handler" podUID="826e8825-487e-4a9e-8a18-21245055c769" podNamespace="kube-system" podName="kube-proxy-z8dbs"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.118176    3079 topology_manager.go:215] "Topology Admit Handler" podUID="b9037bf4-865b-4ef6-8138-1a3c6a8d1500" podNamespace="kube-system" podName="storage-provisioner"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.118290    3079 topology_manager.go:215] "Topology Admit Handler" podUID="55131023-9fdc-4c5b-86f3-0963e13b54c2" podNamespace="default" podName="busybox-fc5497c4f-c8hj2"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.126917    3079 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.210791    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a05815a1-89f4-4adf-88f3-d85b1c969cd6-cni-cfg\") pod \"kindnet-p69ft\" (UID: \"a05815a1-89f4-4adf-88f3-d85b1c969cd6\") " pod="kube-system/kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.210912    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a05815a1-89f4-4adf-88f3-d85b1c969cd6-xtables-lock\") pod \"kindnet-p69ft\" (UID: \"a05815a1-89f4-4adf-88f3-d85b1c969cd6\") " pod="kube-system/kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211074    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/826e8825-487e-4a9e-8a18-21245055c769-lib-modules\") pod \"kube-proxy-z8dbs\" (UID: \"826e8825-487e-4a9e-8a18-21245055c769\") " pod="kube-system/kube-proxy-z8dbs"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211778    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a05815a1-89f4-4adf-88f3-d85b1c969cd6-lib-modules\") pod \"kindnet-p69ft\" (UID: \"a05815a1-89f4-4adf-88f3-d85b1c969cd6\") " pod="kube-system/kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211832    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/826e8825-487e-4a9e-8a18-21245055c769-xtables-lock\") pod \"kube-proxy-z8dbs\" (UID: \"826e8825-487e-4a9e-8a18-21245055c769\") " pod="kube-system/kube-proxy-z8dbs"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211850    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9037bf4-865b-4ef6-8138-1a3c6a8d1500-tmp\") pod \"storage-provisioner\" (UID: \"b9037bf4-865b-4ef6-8138-1a3c6a8d1500\") " pod="kube-system/storage-provisioner"
	May 20 13:11:31 multinode-865571 kubelet[3079]: I0520 13:11:31.029214    3079 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 20 13:12:23 multinode-865571 kubelet[3079]: E0520 13:12:23.212174    3079 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:12:23 multinode-865571 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:12:23 multinode-865571 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:12:23 multinode-865571 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:12:23 multinode-865571 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 13:12:42.768658  893623 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18932-852915/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-865571 -n multinode-865571
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-865571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (305.22s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 stop
E0520 13:14:13.565255  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865571 stop: exit status 82 (2m0.473284572s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-865571-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-865571 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865571 status: exit status 3 (18.714314103s)

                                                
                                                
-- stdout --
	multinode-865571
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-865571-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 13:15:06.007253  894314 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0520 13:15:06.007289  894314 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-865571 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-865571 -n multinode-865571
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-865571 logs -n 25: (1.482127161s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571:/home/docker/cp-test_multinode-865571-m02_multinode-865571.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571 sudo cat                                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m02_multinode-865571.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03:/home/docker/cp-test_multinode-865571-m02_multinode-865571-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571-m03 sudo cat                                   | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m02_multinode-865571-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp testdata/cp-test.txt                                                | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile540683293/001/cp-test_multinode-865571-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571:/home/docker/cp-test_multinode-865571-m03_multinode-865571.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571 sudo cat                                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m03_multinode-865571.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02:/home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571-m02 sudo cat                                   | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-865571 node stop m03                                                          | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	| node    | multinode-865571 node start                                                             | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	| stop    | -p multinode-865571                                                                     | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	| start   | -p multinode-865571                                                                     | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:09 UTC | 20 May 24 13:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:12 UTC |                     |
	| node    | multinode-865571 node delete                                                            | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:12 UTC | 20 May 24 13:12 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-865571 stop                                                                   | multinode-865571 | jenkins | v1.33.1 | 20 May 24 13:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:09:41
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:09:41.401627  892584 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:09:41.401744  892584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:09:41.401755  892584 out.go:304] Setting ErrFile to fd 2...
	I0520 13:09:41.401761  892584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:09:41.401972  892584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:09:41.402513  892584 out.go:298] Setting JSON to false
	I0520 13:09:41.403544  892584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10329,"bootTime":1716200252,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:09:41.403600  892584 start.go:139] virtualization: kvm guest
	I0520 13:09:41.405890  892584 out.go:177] * [multinode-865571] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:09:41.407691  892584 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:09:41.409038  892584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:09:41.407689  892584 notify.go:220] Checking for updates...
	I0520 13:09:41.411265  892584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:09:41.412518  892584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:09:41.413734  892584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:09:41.414959  892584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:09:41.416536  892584 config.go:182] Loaded profile config "multinode-865571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:09:41.416680  892584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:09:41.417169  892584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:09:41.417224  892584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:09:41.438382  892584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0520 13:09:41.438825  892584 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:09:41.439419  892584 main.go:141] libmachine: Using API Version  1
	I0520 13:09:41.439440  892584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:09:41.439948  892584 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:09:41.440192  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:09:41.475614  892584 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:09:41.476872  892584 start.go:297] selected driver: kvm2
	I0520 13:09:41.476881  892584 start.go:901] validating driver "kvm2" against &{Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:09:41.477028  892584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:09:41.477331  892584 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:09:41.477390  892584 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:09:41.492014  892584 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:09:41.492670  892584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:09:41.492743  892584 cni.go:84] Creating CNI manager for ""
	I0520 13:09:41.492756  892584 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 13:09:41.492814  892584 start.go:340] cluster config:
	{Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:09:41.492934  892584 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:09:41.494792  892584 out.go:177] * Starting "multinode-865571" primary control-plane node in "multinode-865571" cluster
	I0520 13:09:41.496090  892584 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:09:41.496118  892584 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:09:41.496128  892584 cache.go:56] Caching tarball of preloaded images
	I0520 13:09:41.496205  892584 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:09:41.496216  892584 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:09:41.496329  892584 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/config.json ...
	I0520 13:09:41.496504  892584 start.go:360] acquireMachinesLock for multinode-865571: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:09:41.496541  892584 start.go:364] duration metric: took 20.303µs to acquireMachinesLock for "multinode-865571"
	I0520 13:09:41.496553  892584 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:09:41.496561  892584 fix.go:54] fixHost starting: 
	I0520 13:09:41.496843  892584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:09:41.496877  892584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:09:41.510550  892584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0520 13:09:41.511048  892584 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:09:41.511523  892584 main.go:141] libmachine: Using API Version  1
	I0520 13:09:41.511545  892584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:09:41.511814  892584 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:09:41.512008  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:09:41.512136  892584 main.go:141] libmachine: (multinode-865571) Calling .GetState
	I0520 13:09:41.513744  892584 fix.go:112] recreateIfNeeded on multinode-865571: state=Running err=<nil>
	W0520 13:09:41.513764  892584 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:09:41.516559  892584 out.go:177] * Updating the running kvm2 "multinode-865571" VM ...
	I0520 13:09:41.518042  892584 machine.go:94] provisionDockerMachine start ...
	I0520 13:09:41.518066  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:09:41.518259  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.520775  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.521279  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.521309  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.521430  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.521593  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.521781  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.521955  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.522131  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:41.522339  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:41.522351  892584 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:09:41.640152  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-865571
	
	I0520 13:09:41.640182  892584 main.go:141] libmachine: (multinode-865571) Calling .GetMachineName
	I0520 13:09:41.640448  892584 buildroot.go:166] provisioning hostname "multinode-865571"
	I0520 13:09:41.640481  892584 main.go:141] libmachine: (multinode-865571) Calling .GetMachineName
	I0520 13:09:41.640673  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.643431  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.643791  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.643829  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.644010  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.644209  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.644384  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.644524  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.644680  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:41.644856  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:41.644869  892584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-865571 && echo "multinode-865571" | sudo tee /etc/hostname
	I0520 13:09:41.775557  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-865571
	
	I0520 13:09:41.775580  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.778395  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.778785  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.778832  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.779056  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.779261  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.779441  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.779608  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.779775  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:41.779968  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:41.779984  892584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-865571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-865571/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-865571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:09:41.896561  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:09:41.896598  892584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 13:09:41.896658  892584 buildroot.go:174] setting up certificates
	I0520 13:09:41.896673  892584 provision.go:84] configureAuth start
	I0520 13:09:41.896694  892584 main.go:141] libmachine: (multinode-865571) Calling .GetMachineName
	I0520 13:09:41.897018  892584 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:09:41.899498  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.899840  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.899861  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.900016  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.902091  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.902451  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.902483  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.902592  892584 provision.go:143] copyHostCerts
	I0520 13:09:41.902621  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:09:41.902668  892584 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 13:09:41.902688  892584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:09:41.902769  892584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 13:09:41.902910  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:09:41.902936  892584 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 13:09:41.902943  892584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:09:41.902984  892584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 13:09:41.903064  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:09:41.903087  892584 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 13:09:41.903094  892584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:09:41.903131  892584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 13:09:41.903212  892584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.multinode-865571 san=[127.0.0.1 192.168.39.78 localhost minikube multinode-865571]
	I0520 13:09:41.981621  892584 provision.go:177] copyRemoteCerts
	I0520 13:09:41.981693  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:09:41.981734  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:41.984360  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.984677  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:41.984707  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:41.984870  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:41.985079  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:41.985270  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:41.985401  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:09:42.074874  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 13:09:42.074972  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 13:09:42.101153  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 13:09:42.101211  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 13:09:42.125689  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 13:09:42.125743  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 13:09:42.150595  892584 provision.go:87] duration metric: took 253.901402ms to configureAuth
	I0520 13:09:42.150630  892584 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:09:42.150912  892584 config.go:182] Loaded profile config "multinode-865571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:09:42.151005  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:09:42.153650  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:42.154008  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:09:42.154036  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:09:42.154167  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:09:42.154354  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:42.154484  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:09:42.154595  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:09:42.154704  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:09:42.154924  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:09:42.154945  892584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:11:12.916665  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:11:12.916756  892584 machine.go:97] duration metric: took 1m31.39863487s to provisionDockerMachine
	I0520 13:11:12.916784  892584 start.go:293] postStartSetup for "multinode-865571" (driver="kvm2")
	I0520 13:11:12.916802  892584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:11:12.916841  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:12.917239  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:11:12.917279  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:12.920514  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:12.921031  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:12.921062  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:12.921230  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:12.921427  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:12.921598  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:12.921744  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:11:13.011195  892584 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:11:13.015534  892584 command_runner.go:130] > NAME=Buildroot
	I0520 13:11:13.015556  892584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 13:11:13.015561  892584 command_runner.go:130] > ID=buildroot
	I0520 13:11:13.015566  892584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 13:11:13.015571  892584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 13:11:13.015638  892584 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:11:13.015664  892584 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 13:11:13.015744  892584 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 13:11:13.015819  892584 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 13:11:13.015830  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /etc/ssl/certs/8603342.pem
	I0520 13:11:13.015906  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:11:13.025559  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:11:13.048846  892584 start.go:296] duration metric: took 132.045747ms for postStartSetup
	I0520 13:11:13.048885  892584 fix.go:56] duration metric: took 1m31.552324117s for fixHost
	I0520 13:11:13.048908  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:13.051506  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.051855  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.051880  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.052108  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:13.052325  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.052477  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.052610  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:13.052809  892584 main.go:141] libmachine: Using SSH client type: native
	I0520 13:11:13.053009  892584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0520 13:11:13.053021  892584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:11:13.167683  892584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210673.149564513
	
	I0520 13:11:13.167708  892584 fix.go:216] guest clock: 1716210673.149564513
	I0520 13:11:13.167715  892584 fix.go:229] Guest: 2024-05-20 13:11:13.149564513 +0000 UTC Remote: 2024-05-20 13:11:13.048889216 +0000 UTC m=+91.683191693 (delta=100.675297ms)
	I0520 13:11:13.167736  892584 fix.go:200] guest clock delta is within tolerance: 100.675297ms
	I0520 13:11:13.167742  892584 start.go:83] releasing machines lock for "multinode-865571", held for 1m31.671192938s
	I0520 13:11:13.167762  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.168046  892584 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:11:13.170614  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.170974  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.171017  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.171204  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.171689  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.171886  892584 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:11:13.172006  892584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:11:13.172046  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:13.172149  892584 ssh_runner.go:195] Run: cat /version.json
	I0520 13:11:13.172177  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:11:13.174409  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.174622  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.174769  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.174800  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.174930  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:13.175053  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:13.175079  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:13.175125  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.175239  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:11:13.175306  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:13.175390  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:11:13.175406  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:11:13.175510  892584 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:11:13.175642  892584 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:11:13.255378  892584 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	W0520 13:11:13.255551  892584 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:11:13.255633  892584 ssh_runner.go:195] Run: systemctl --version
	I0520 13:11:13.279927  892584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 13:11:13.279988  892584 command_runner.go:130] > systemd 252 (252)
	I0520 13:11:13.280021  892584 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 13:11:13.280159  892584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:11:13.436928  892584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 13:11:13.443022  892584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 13:11:13.443188  892584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:11:13.443256  892584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:11:13.452395  892584 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:11:13.452413  892584 start.go:494] detecting cgroup driver to use...
	I0520 13:11:13.452471  892584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:11:13.468182  892584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:11:13.481794  892584 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:11:13.481852  892584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:11:13.494673  892584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:11:13.507653  892584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:11:13.644228  892584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:11:13.780453  892584 docker.go:233] disabling docker service ...
	I0520 13:11:13.780534  892584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:11:13.797929  892584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:11:13.813275  892584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:11:13.951120  892584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:11:14.092338  892584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:11:14.107709  892584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:11:14.126780  892584 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 13:11:14.126858  892584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:11:14.126921  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.137886  892584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:11:14.137959  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.148728  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.159271  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.170000  892584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:11:14.181237  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.192155  892584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.202813  892584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:11:14.214831  892584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:11:14.227151  892584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 13:11:14.227220  892584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:11:14.236659  892584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:11:14.369098  892584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:11:20.897191  892584 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.528050351s)
	I0520 13:11:20.897222  892584 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:11:20.897272  892584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:11:20.902136  892584 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 13:11:20.902165  892584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 13:11:20.902174  892584 command_runner.go:130] > Device: 0,22	Inode: 1325        Links: 1
	I0520 13:11:20.902180  892584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:11:20.902186  892584 command_runner.go:130] > Access: 2024-05-20 13:11:20.775317375 +0000
	I0520 13:11:20.902192  892584 command_runner.go:130] > Modify: 2024-05-20 13:11:20.775317375 +0000
	I0520 13:11:20.902197  892584 command_runner.go:130] > Change: 2024-05-20 13:11:20.775317375 +0000
	I0520 13:11:20.902201  892584 command_runner.go:130] >  Birth: -
	I0520 13:11:20.902239  892584 start.go:562] Will wait 60s for crictl version
	I0520 13:11:20.902277  892584 ssh_runner.go:195] Run: which crictl
	I0520 13:11:20.905990  892584 command_runner.go:130] > /usr/bin/crictl
	I0520 13:11:20.906055  892584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:11:20.941487  892584 command_runner.go:130] > Version:  0.1.0
	I0520 13:11:20.941507  892584 command_runner.go:130] > RuntimeName:  cri-o
	I0520 13:11:20.941511  892584 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 13:11:20.941516  892584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 13:11:20.942569  892584 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:11:20.942633  892584 ssh_runner.go:195] Run: crio --version
	I0520 13:11:20.970140  892584 command_runner.go:130] > crio version 1.29.1
	I0520 13:11:20.970164  892584 command_runner.go:130] > Version:        1.29.1
	I0520 13:11:20.970169  892584 command_runner.go:130] > GitCommit:      unknown
	I0520 13:11:20.970174  892584 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:11:20.970178  892584 command_runner.go:130] > GitTreeState:   clean
	I0520 13:11:20.970184  892584 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:11:20.970189  892584 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:11:20.970192  892584 command_runner.go:130] > Compiler:       gc
	I0520 13:11:20.970197  892584 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:11:20.970207  892584 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:11:20.970213  892584 command_runner.go:130] > BuildTags:      
	I0520 13:11:20.970220  892584 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:11:20.970231  892584 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:11:20.970238  892584 command_runner.go:130] >   btrfs_noversion
	I0520 13:11:20.970248  892584 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:11:20.970258  892584 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:11:20.970265  892584 command_runner.go:130] >   seccomp
	I0520 13:11:20.970269  892584 command_runner.go:130] > LDFlags:          unknown
	I0520 13:11:20.970276  892584 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:11:20.970280  892584 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:11:20.971407  892584 ssh_runner.go:195] Run: crio --version
	I0520 13:11:20.999282  892584 command_runner.go:130] > crio version 1.29.1
	I0520 13:11:20.999310  892584 command_runner.go:130] > Version:        1.29.1
	I0520 13:11:20.999319  892584 command_runner.go:130] > GitCommit:      unknown
	I0520 13:11:20.999325  892584 command_runner.go:130] > GitCommitDate:  unknown
	I0520 13:11:20.999331  892584 command_runner.go:130] > GitTreeState:   clean
	I0520 13:11:20.999338  892584 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 13:11:20.999344  892584 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 13:11:20.999350  892584 command_runner.go:130] > Compiler:       gc
	I0520 13:11:20.999358  892584 command_runner.go:130] > Platform:       linux/amd64
	I0520 13:11:20.999365  892584 command_runner.go:130] > Linkmode:       dynamic
	I0520 13:11:20.999374  892584 command_runner.go:130] > BuildTags:      
	I0520 13:11:20.999385  892584 command_runner.go:130] >   containers_image_ostree_stub
	I0520 13:11:20.999393  892584 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 13:11:20.999402  892584 command_runner.go:130] >   btrfs_noversion
	I0520 13:11:20.999410  892584 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 13:11:20.999421  892584 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 13:11:20.999431  892584 command_runner.go:130] >   seccomp
	I0520 13:11:20.999439  892584 command_runner.go:130] > LDFlags:          unknown
	I0520 13:11:20.999458  892584 command_runner.go:130] > SeccompEnabled:   true
	I0520 13:11:20.999468  892584 command_runner.go:130] > AppArmorEnabled:  false
	I0520 13:11:21.002386  892584 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:11:21.003758  892584 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:11:21.006648  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:21.007087  892584 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:11:21.007119  892584 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:11:21.007332  892584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:11:21.011630  892584 command_runner.go:130] > 192.168.39.1	host.minikube.internal
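Note: the gateway IP 192.168.39.1 already resolves as host.minikube.internal inside the guest, so no /etc/hosts edit is needed here. A hedged sketch of the usual idempotent pattern for ensuring such an entry (the exact command minikube would run in the miss case is not shown in this log):

	# Append the entry only if the grep above finds nothing.
	grep -q 'host.minikube.internal' /etc/hosts || \
	  echo '192.168.39.1 host.minikube.internal' | sudo tee -a /etc/hosts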
	I0520 13:11:21.011730  892584 kubeadm.go:877] updating cluster {Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:11:21.011874  892584 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:11:21.011919  892584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:11:21.059854  892584 command_runner.go:130] > {
	I0520 13:11:21.059877  892584 command_runner.go:130] >   "images": [
	I0520 13:11:21.059881  892584 command_runner.go:130] >     {
	I0520 13:11:21.059890  892584 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:11:21.059894  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.059901  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:11:21.059904  892584 command_runner.go:130] >       ],
	I0520 13:11:21.059914  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.059927  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:11:21.059939  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:11:21.059949  892584 command_runner.go:130] >       ],
	I0520 13:11:21.059956  892584 command_runner.go:130] >       "size": "65291810",
	I0520 13:11:21.059962  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.059970  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.059984  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.059991  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.059995  892584 command_runner.go:130] >     },
	I0520 13:11:21.059999  892584 command_runner.go:130] >     {
	I0520 13:11:21.060005  892584 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 13:11:21.060009  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060014  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 13:11:21.060018  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060023  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060035  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 13:11:21.060050  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 13:11:21.060057  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060067  892584 command_runner.go:130] >       "size": "1363676",
	I0520 13:11:21.060074  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060088  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060095  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060099  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060102  892584 command_runner.go:130] >     },
	I0520 13:11:21.060106  892584 command_runner.go:130] >     {
	I0520 13:11:21.060111  892584 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:11:21.060116  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060121  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:11:21.060126  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060131  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060144  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:11:21.060160  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:11:21.060169  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060176  892584 command_runner.go:130] >       "size": "31470524",
	I0520 13:11:21.060186  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060192  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060199  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060203  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060209  892584 command_runner.go:130] >     },
	I0520 13:11:21.060212  892584 command_runner.go:130] >     {
	I0520 13:11:21.060218  892584 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:11:21.060224  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060229  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:11:21.060235  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060242  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060258  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:11:21.060280  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:11:21.060290  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060296  892584 command_runner.go:130] >       "size": "61245718",
	I0520 13:11:21.060302  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060309  892584 command_runner.go:130] >       "username": "nonroot",
	I0520 13:11:21.060315  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060319  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060323  892584 command_runner.go:130] >     },
	I0520 13:11:21.060326  892584 command_runner.go:130] >     {
	I0520 13:11:21.060335  892584 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:11:21.060355  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060367  892584 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:11:21.060376  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060382  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060396  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:11:21.060410  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:11:21.060417  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060421  892584 command_runner.go:130] >       "size": "150779692",
	I0520 13:11:21.060428  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.060434  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.060443  892584 command_runner.go:130] >       },
	I0520 13:11:21.060449  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060459  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060474  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060479  892584 command_runner.go:130] >     },
	I0520 13:11:21.060487  892584 command_runner.go:130] >     {
	I0520 13:11:21.060497  892584 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:11:21.060506  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060513  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:11:21.060522  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060529  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060548  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:11:21.060563  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:11:21.060572  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060582  892584 command_runner.go:130] >       "size": "117601759",
	I0520 13:11:21.060593  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.060600  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.060606  892584 command_runner.go:130] >       },
	I0520 13:11:21.060610  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060618  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060626  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060633  892584 command_runner.go:130] >     },
	I0520 13:11:21.060641  892584 command_runner.go:130] >     {
	I0520 13:11:21.060651  892584 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:11:21.060660  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060671  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:11:21.060687  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060694  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060726  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:11:21.060743  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:11:21.060750  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060759  892584 command_runner.go:130] >       "size": "112170310",
	I0520 13:11:21.060765  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.060773  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.060780  892584 command_runner.go:130] >       },
	I0520 13:11:21.060787  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060797  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060804  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060812  892584 command_runner.go:130] >     },
	I0520 13:11:21.060815  892584 command_runner.go:130] >     {
	I0520 13:11:21.060824  892584 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:11:21.060834  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060846  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:11:21.060851  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060861  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060888  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:11:21.060902  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:11:21.060908  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060913  892584 command_runner.go:130] >       "size": "85933465",
	I0520 13:11:21.060917  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.060920  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.060926  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.060932  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.060938  892584 command_runner.go:130] >     },
	I0520 13:11:21.060944  892584 command_runner.go:130] >     {
	I0520 13:11:21.060952  892584 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:11:21.060959  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.060967  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:11:21.060972  892584 command_runner.go:130] >       ],
	I0520 13:11:21.060978  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.060993  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:11:21.061002  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:11:21.061012  892584 command_runner.go:130] >       ],
	I0520 13:11:21.061019  892584 command_runner.go:130] >       "size": "63026504",
	I0520 13:11:21.061026  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.061036  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.061042  892584 command_runner.go:130] >       },
	I0520 13:11:21.061051  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.061057  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.061066  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.061072  892584 command_runner.go:130] >     },
	I0520 13:11:21.061080  892584 command_runner.go:130] >     {
	I0520 13:11:21.061087  892584 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:11:21.061093  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.061101  892584 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:11:21.061110  892584 command_runner.go:130] >       ],
	I0520 13:11:21.061117  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.061130  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:11:21.061144  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:11:21.061152  892584 command_runner.go:130] >       ],
	I0520 13:11:21.061159  892584 command_runner.go:130] >       "size": "750414",
	I0520 13:11:21.061169  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.061175  892584 command_runner.go:130] >         "value": "65535"
	I0520 13:11:21.061180  892584 command_runner.go:130] >       },
	I0520 13:11:21.061185  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.061192  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.061199  892584 command_runner.go:130] >       "pinned": true
	I0520 13:11:21.061207  892584 command_runner.go:130] >     }
	I0520 13:11:21.061212  892584 command_runner.go:130] >   ]
	I0520 13:11:21.061217  892584 command_runner.go:130] > }
	I0520 13:11:21.061435  892584 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:11:21.061448  892584 crio.go:433] Images already preloaded, skipping extraction
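Note: the preload check parses the JSON image listing above and concludes that every required image is already in the CRI-O store, so no tarball extraction is needed. A small sketch for inspecting the same data by hand; it assumes jq is available on the node and is not part of the minikube run:

	# Print one repo tag per line from the crictl image listing.
	sudo crictl images --output json | jq -r '.images[].repoTags[]'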
	I0520 13:11:21.061507  892584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:11:21.093098  892584 command_runner.go:130] > {
	I0520 13:11:21.093128  892584 command_runner.go:130] >   "images": [
	I0520 13:11:21.093135  892584 command_runner.go:130] >     {
	I0520 13:11:21.093143  892584 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 13:11:21.093157  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093166  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 13:11:21.093174  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093180  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093195  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 13:11:21.093210  892584 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 13:11:21.093215  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093223  892584 command_runner.go:130] >       "size": "65291810",
	I0520 13:11:21.093227  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093231  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093248  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093255  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093258  892584 command_runner.go:130] >     },
	I0520 13:11:21.093262  892584 command_runner.go:130] >     {
	I0520 13:11:21.093268  892584 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 13:11:21.093276  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093288  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 13:11:21.093296  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093303  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093317  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 13:11:21.093328  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 13:11:21.093335  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093339  892584 command_runner.go:130] >       "size": "1363676",
	I0520 13:11:21.093345  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093352  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093358  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093362  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093367  892584 command_runner.go:130] >     },
	I0520 13:11:21.093371  892584 command_runner.go:130] >     {
	I0520 13:11:21.093382  892584 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 13:11:21.093393  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093402  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 13:11:21.093414  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093423  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093434  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 13:11:21.093444  892584 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 13:11:21.093449  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093454  892584 command_runner.go:130] >       "size": "31470524",
	I0520 13:11:21.093460  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093464  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093472  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093483  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093491  892584 command_runner.go:130] >     },
	I0520 13:11:21.093497  892584 command_runner.go:130] >     {
	I0520 13:11:21.093511  892584 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 13:11:21.093520  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093532  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 13:11:21.093540  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093547  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093557  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 13:11:21.093569  892584 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 13:11:21.093578  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093588  892584 command_runner.go:130] >       "size": "61245718",
	I0520 13:11:21.093595  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.093605  892584 command_runner.go:130] >       "username": "nonroot",
	I0520 13:11:21.093618  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093627  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093636  892584 command_runner.go:130] >     },
	I0520 13:11:21.093644  892584 command_runner.go:130] >     {
	I0520 13:11:21.093656  892584 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 13:11:21.093663  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093671  892584 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 13:11:21.093680  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093690  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093711  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 13:11:21.093725  892584 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 13:11:21.093734  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093742  892584 command_runner.go:130] >       "size": "150779692",
	I0520 13:11:21.093748  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.093758  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.093766  892584 command_runner.go:130] >       },
	I0520 13:11:21.093774  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093784  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093792  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093801  892584 command_runner.go:130] >     },
	I0520 13:11:21.093808  892584 command_runner.go:130] >     {
	I0520 13:11:21.093821  892584 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 13:11:21.093830  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093838  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 13:11:21.093844  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093853  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.093871  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 13:11:21.093886  892584 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 13:11:21.093894  892584 command_runner.go:130] >       ],
	I0520 13:11:21.093904  892584 command_runner.go:130] >       "size": "117601759",
	I0520 13:11:21.093913  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.093923  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.093930  892584 command_runner.go:130] >       },
	I0520 13:11:21.093934  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.093940  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.093947  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.093955  892584 command_runner.go:130] >     },
	I0520 13:11:21.093964  892584 command_runner.go:130] >     {
	I0520 13:11:21.093977  892584 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 13:11:21.093987  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.093999  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 13:11:21.094008  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094017  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094049  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 13:11:21.094066  892584 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 13:11:21.094085  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094095  892584 command_runner.go:130] >       "size": "112170310",
	I0520 13:11:21.094104  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.094113  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.094123  892584 command_runner.go:130] >       },
	I0520 13:11:21.094131  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094135  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094143  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.094152  892584 command_runner.go:130] >     },
	I0520 13:11:21.094158  892584 command_runner.go:130] >     {
	I0520 13:11:21.094171  892584 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 13:11:21.094180  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.094188  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 13:11:21.094196  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094203  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094223  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 13:11:21.094238  892584 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 13:11:21.094247  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094254  892584 command_runner.go:130] >       "size": "85933465",
	I0520 13:11:21.094264  892584 command_runner.go:130] >       "uid": null,
	I0520 13:11:21.094271  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094281  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094287  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.094296  892584 command_runner.go:130] >     },
	I0520 13:11:21.094302  892584 command_runner.go:130] >     {
	I0520 13:11:21.094314  892584 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 13:11:21.094320  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.094326  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 13:11:21.094334  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094342  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094357  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 13:11:21.094371  892584 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 13:11:21.094379  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094385  892584 command_runner.go:130] >       "size": "63026504",
	I0520 13:11:21.094395  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.094402  892584 command_runner.go:130] >         "value": "0"
	I0520 13:11:21.094410  892584 command_runner.go:130] >       },
	I0520 13:11:21.094416  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094426  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094434  892584 command_runner.go:130] >       "pinned": false
	I0520 13:11:21.094444  892584 command_runner.go:130] >     },
	I0520 13:11:21.094452  892584 command_runner.go:130] >     {
	I0520 13:11:21.094462  892584 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 13:11:21.094471  892584 command_runner.go:130] >       "repoTags": [
	I0520 13:11:21.094481  892584 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 13:11:21.094490  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094497  892584 command_runner.go:130] >       "repoDigests": [
	I0520 13:11:21.094506  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 13:11:21.094525  892584 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 13:11:21.094537  892584 command_runner.go:130] >       ],
	I0520 13:11:21.094544  892584 command_runner.go:130] >       "size": "750414",
	I0520 13:11:21.094554  892584 command_runner.go:130] >       "uid": {
	I0520 13:11:21.094563  892584 command_runner.go:130] >         "value": "65535"
	I0520 13:11:21.094572  892584 command_runner.go:130] >       },
	I0520 13:11:21.094582  892584 command_runner.go:130] >       "username": "",
	I0520 13:11:21.094590  892584 command_runner.go:130] >       "spec": null,
	I0520 13:11:21.094599  892584 command_runner.go:130] >       "pinned": true
	I0520 13:11:21.094606  892584 command_runner.go:130] >     }
	I0520 13:11:21.094609  892584 command_runner.go:130] >   ]
	I0520 13:11:21.094613  892584 command_runner.go:130] > }
	I0520 13:11:21.094792  892584 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:11:21.094807  892584 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:11:21.094817  892584 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.30.1 crio true true} ...
	I0520 13:11:21.094976  892584 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-865571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
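Note: the [Unit]/[Service]/[Install] block above is the kubelet systemd override minikube renders from the cluster config (CRI-O as the runtime, the node IP and hostname pinned). A sketch of writing it as a drop-in by hand; the drop-in path below is an assumption for illustration and is not taken from this log:

	# Hypothetical drop-in location; minikube's actual path is not shown in this excerpt.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-865571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet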
	I0520 13:11:21.095060  892584 ssh_runner.go:195] Run: crio config
	I0520 13:11:21.140917  892584 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 13:11:21.140949  892584 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 13:11:21.140959  892584 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 13:11:21.140963  892584 command_runner.go:130] > #
	I0520 13:11:21.141002  892584 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 13:11:21.141015  892584 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 13:11:21.141021  892584 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 13:11:21.141032  892584 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 13:11:21.141041  892584 command_runner.go:130] > # reload'.
	I0520 13:11:21.141050  892584 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 13:11:21.141061  892584 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 13:11:21.141073  892584 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 13:11:21.141086  892584 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 13:11:21.141092  892584 command_runner.go:130] > [crio]
	I0520 13:11:21.141102  892584 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 13:11:21.141113  892584 command_runner.go:130] > # containers images, in this directory.
	I0520 13:11:21.141124  892584 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 13:11:21.141137  892584 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 13:11:21.141155  892584 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 13:11:21.141176  892584 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 13:11:21.141187  892584 command_runner.go:130] > # imagestore = ""
	I0520 13:11:21.141196  892584 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 13:11:21.141206  892584 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 13:11:21.141217  892584 command_runner.go:130] > storage_driver = "overlay"
	I0520 13:11:21.141230  892584 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 13:11:21.141242  892584 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 13:11:21.141250  892584 command_runner.go:130] > storage_option = [
	I0520 13:11:21.141259  892584 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 13:11:21.141267  892584 command_runner.go:130] > ]
	I0520 13:11:21.141281  892584 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 13:11:21.141292  892584 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 13:11:21.141302  892584 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 13:11:21.141312  892584 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 13:11:21.141325  892584 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 13:11:21.141335  892584 command_runner.go:130] > # always happen on a node reboot
	I0520 13:11:21.141346  892584 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 13:11:21.141368  892584 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 13:11:21.141380  892584 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 13:11:21.141390  892584 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 13:11:21.141397  892584 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 13:11:21.141410  892584 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 13:11:21.141426  892584 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 13:11:21.141436  892584 command_runner.go:130] > # internal_wipe = true
	I0520 13:11:21.141448  892584 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 13:11:21.141460  892584 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 13:11:21.141469  892584 command_runner.go:130] > # internal_repair = false
	I0520 13:11:21.141484  892584 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 13:11:21.141497  892584 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 13:11:21.141506  892584 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 13:11:21.141517  892584 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 13:11:21.141527  892584 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 13:11:21.141535  892584 command_runner.go:130] > [crio.api]
	I0520 13:11:21.141544  892584 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 13:11:21.141559  892584 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 13:11:21.141580  892584 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 13:11:21.141590  892584 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 13:11:21.141601  892584 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 13:11:21.141611  892584 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 13:11:21.141619  892584 command_runner.go:130] > # stream_port = "0"
	I0520 13:11:21.141627  892584 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 13:11:21.141634  892584 command_runner.go:130] > # stream_enable_tls = false
	I0520 13:11:21.141642  892584 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 13:11:21.141653  892584 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 13:11:21.141667  892584 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 13:11:21.141695  892584 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 13:11:21.141706  892584 command_runner.go:130] > # minutes.
	I0520 13:11:21.141712  892584 command_runner.go:130] > # stream_tls_cert = ""
	I0520 13:11:21.141737  892584 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 13:11:21.141751  892584 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 13:11:21.141760  892584 command_runner.go:130] > # stream_tls_key = ""
	I0520 13:11:21.141769  892584 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 13:11:21.141781  892584 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 13:11:21.141806  892584 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 13:11:21.141815  892584 command_runner.go:130] > # stream_tls_ca = ""
	I0520 13:11:21.141828  892584 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:11:21.141839  892584 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 13:11:21.141859  892584 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 13:11:21.141870  892584 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
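(For reference: 16777216 bytes = 16 x 1024 x 1024, i.e. this config caps gRPC send and receive messages at 16 MiB, tighter than the 80 x 1024 x 1024 default mentioned in the comments above.)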
	I0520 13:11:21.141882  892584 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 13:11:21.141891  892584 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 13:11:21.141898  892584 command_runner.go:130] > [crio.runtime]
	I0520 13:11:21.141907  892584 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 13:11:21.141920  892584 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 13:11:21.141929  892584 command_runner.go:130] > # "nofile=1024:2048"
	I0520 13:11:21.141942  892584 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 13:11:21.141951  892584 command_runner.go:130] > # default_ulimits = [
	I0520 13:11:21.141957  892584 command_runner.go:130] > # ]
	I0520 13:11:21.141970  892584 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 13:11:21.141975  892584 command_runner.go:130] > # no_pivot = false
	I0520 13:11:21.141984  892584 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 13:11:21.141996  892584 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 13:11:21.142006  892584 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 13:11:21.142021  892584 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 13:11:21.142032  892584 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 13:11:21.142045  892584 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:11:21.142055  892584 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 13:11:21.142062  892584 command_runner.go:130] > # Cgroup setting for conmon
	I0520 13:11:21.142077  892584 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 13:11:21.142084  892584 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 13:11:21.142094  892584 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 13:11:21.142105  892584 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 13:11:21.142118  892584 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 13:11:21.142127  892584 command_runner.go:130] > conmon_env = [
	I0520 13:11:21.142136  892584 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:11:21.142141  892584 command_runner.go:130] > ]
	I0520 13:11:21.142146  892584 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 13:11:21.142153  892584 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 13:11:21.142158  892584 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 13:11:21.142164  892584 command_runner.go:130] > # default_env = [
	I0520 13:11:21.142167  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142173  892584 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 13:11:21.142180  892584 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 13:11:21.142187  892584 command_runner.go:130] > # selinux = false
	I0520 13:11:21.142196  892584 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 13:11:21.142204  892584 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 13:11:21.142217  892584 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 13:11:21.142227  892584 command_runner.go:130] > # seccomp_profile = ""
	I0520 13:11:21.142239  892584 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 13:11:21.142250  892584 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 13:11:21.142262  892584 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 13:11:21.142272  892584 command_runner.go:130] > # which might increase security.
	I0520 13:11:21.142279  892584 command_runner.go:130] > # This option is currently deprecated,
	I0520 13:11:21.142289  892584 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 13:11:21.142296  892584 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 13:11:21.142308  892584 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 13:11:21.142322  892584 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 13:11:21.142335  892584 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 13:11:21.142348  892584 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 13:11:21.142359  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.142375  892584 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 13:11:21.142383  892584 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 13:11:21.142388  892584 command_runner.go:130] > # the cgroup blockio controller.
	I0520 13:11:21.142393  892584 command_runner.go:130] > # blockio_config_file = ""
	I0520 13:11:21.142401  892584 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 13:11:21.142410  892584 command_runner.go:130] > # blockio parameters.
	I0520 13:11:21.142416  892584 command_runner.go:130] > # blockio_reload = false
	I0520 13:11:21.142428  892584 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 13:11:21.142437  892584 command_runner.go:130] > # irqbalance daemon.
	I0520 13:11:21.142445  892584 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 13:11:21.142459  892584 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 13:11:21.142473  892584 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 13:11:21.142483  892584 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 13:11:21.142496  892584 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 13:11:21.142507  892584 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 13:11:21.142518  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.142528  892584 command_runner.go:130] > # rdt_config_file = ""
	I0520 13:11:21.142539  892584 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 13:11:21.142549  892584 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 13:11:21.142568  892584 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 13:11:21.142575  892584 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 13:11:21.142580  892584 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 13:11:21.142586  892584 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 13:11:21.142592  892584 command_runner.go:130] > # will be added.
	I0520 13:11:21.142596  892584 command_runner.go:130] > # default_capabilities = [
	I0520 13:11:21.142599  892584 command_runner.go:130] > # 	"CHOWN",
	I0520 13:11:21.142603  892584 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 13:11:21.142607  892584 command_runner.go:130] > # 	"FSETID",
	I0520 13:11:21.142611  892584 command_runner.go:130] > # 	"FOWNER",
	I0520 13:11:21.142615  892584 command_runner.go:130] > # 	"SETGID",
	I0520 13:11:21.142618  892584 command_runner.go:130] > # 	"SETUID",
	I0520 13:11:21.142622  892584 command_runner.go:130] > # 	"SETPCAP",
	I0520 13:11:21.142626  892584 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 13:11:21.142629  892584 command_runner.go:130] > # 	"KILL",
	I0520 13:11:21.142632  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142639  892584 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 13:11:21.142647  892584 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 13:11:21.142652  892584 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 13:11:21.142660  892584 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 13:11:21.142669  892584 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:11:21.142678  892584 command_runner.go:130] > default_sysctls = [
	I0520 13:11:21.142690  892584 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 13:11:21.142698  892584 command_runner.go:130] > ]
	I0520 13:11:21.142705  892584 command_runner.go:130] > # List of devices on the host that a
	I0520 13:11:21.142721  892584 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 13:11:21.142730  892584 command_runner.go:130] > # allowed_devices = [
	I0520 13:11:21.142736  892584 command_runner.go:130] > # 	"/dev/fuse",
	I0520 13:11:21.142745  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142753  892584 command_runner.go:130] > # List of additional devices, specified as
	I0520 13:11:21.142767  892584 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 13:11:21.142778  892584 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 13:11:21.142786  892584 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 13:11:21.142796  892584 command_runner.go:130] > # additional_devices = [
	I0520 13:11:21.142802  892584 command_runner.go:130] > # ]
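As an aside from this run's output: both device lists above take host device paths, and additional_devices uses the <device-on-host>:<device-on-container>:<permissions> triple quoted in the comment. A minimal illustrative stanza, built only from the examples the comments themselves give (not part of this test's generated config), could read:

	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]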
	I0520 13:11:21.142813  892584 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 13:11:21.142823  892584 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 13:11:21.142830  892584 command_runner.go:130] > # 	"/etc/cdi",
	I0520 13:11:21.142838  892584 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 13:11:21.142856  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142869  892584 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 13:11:21.142882  892584 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 13:11:21.142892  892584 command_runner.go:130] > # Defaults to false.
	I0520 13:11:21.142901  892584 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 13:11:21.142914  892584 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 13:11:21.142927  892584 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 13:11:21.142936  892584 command_runner.go:130] > # hooks_dir = [
	I0520 13:11:21.142944  892584 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 13:11:21.142953  892584 command_runner.go:130] > # ]
	I0520 13:11:21.142963  892584 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 13:11:21.142976  892584 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 13:11:21.142987  892584 command_runner.go:130] > # its default mounts from the following two files:
	I0520 13:11:21.142993  892584 command_runner.go:130] > #
	I0520 13:11:21.143005  892584 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 13:11:21.143018  892584 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 13:11:21.143029  892584 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 13:11:21.143037  892584 command_runner.go:130] > #
	I0520 13:11:21.143043  892584 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 13:11:21.143055  892584 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 13:11:21.143068  892584 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 13:11:21.143079  892584 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 13:11:21.143086  892584 command_runner.go:130] > #
	I0520 13:11:21.143094  892584 command_runner.go:130] > # default_mounts_file = ""
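A small aside (nothing here is set by this test run): pointing CRI-O at the override file described above is just the TOML key below; the sample mount line in the comment is invented for illustration, but follows the /SRC:/DST one-mount-per-line format the text describes.

	default_mounts_file = "/etc/containers/mounts.conf"
	# each line of that file is a /SRC:/DST pair, e.g. /usr/share/secrets:/run/secrets (hypothetical)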
	I0520 13:11:21.143102  892584 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 13:11:21.143122  892584 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 13:11:21.143132  892584 command_runner.go:130] > pids_limit = 1024
	I0520 13:11:21.143141  892584 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0520 13:11:21.143154  892584 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 13:11:21.143165  892584 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 13:11:21.143182  892584 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 13:11:21.143194  892584 command_runner.go:130] > # log_size_max = -1
	I0520 13:11:21.143208  892584 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 13:11:21.143220  892584 command_runner.go:130] > # log_to_journald = false
	I0520 13:11:21.143233  892584 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 13:11:21.143244  892584 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 13:11:21.143256  892584 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 13:11:21.143266  892584 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 13:11:21.143277  892584 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 13:11:21.143286  892584 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 13:11:21.143295  892584 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 13:11:21.143304  892584 command_runner.go:130] > # read_only = false
	I0520 13:11:21.143313  892584 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 13:11:21.143327  892584 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 13:11:21.143337  892584 command_runner.go:130] > # live configuration reload.
	I0520 13:11:21.143343  892584 command_runner.go:130] > # log_level = "info"
	I0520 13:11:21.143355  892584 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 13:11:21.143366  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.143372  892584 command_runner.go:130] > # log_filter = ""
	I0520 13:11:21.143379  892584 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 13:11:21.143388  892584 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 13:11:21.143391  892584 command_runner.go:130] > # separated by comma.
	I0520 13:11:21.143398  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143404  892584 command_runner.go:130] > # uid_mappings = ""
	I0520 13:11:21.143410  892584 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 13:11:21.143417  892584 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 13:11:21.143421  892584 command_runner.go:130] > # separated by comma.
	I0520 13:11:21.143430  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143436  892584 command_runner.go:130] > # gid_mappings = ""
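For context only: both mapping options are deprecated as noted above, but a range in the documented containerID:HostID:Size form would look like the sketch below. The 100000/65536 host range is a made-up example, not something this run configures.

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"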
	I0520 13:11:21.143447  892584 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 13:11:21.143459  892584 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:11:21.143475  892584 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:11:21.143491  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143501  892584 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 13:11:21.143518  892584 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 13:11:21.143531  892584 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 13:11:21.143543  892584 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 13:11:21.143558  892584 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 13:11:21.143568  892584 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 13:11:21.143581  892584 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 13:11:21.143594  892584 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 13:11:21.143606  892584 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 13:11:21.143616  892584 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 13:11:21.143627  892584 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 13:11:21.143638  892584 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 13:11:21.143646  892584 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 13:11:21.143652  892584 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 13:11:21.143661  892584 command_runner.go:130] > drop_infra_ctr = false
	I0520 13:11:21.143671  892584 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 13:11:21.143684  892584 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 13:11:21.143698  892584 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 13:11:21.143708  892584 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 13:11:21.143725  892584 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 13:11:21.143737  892584 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 13:11:21.143750  892584 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 13:11:21.143759  892584 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 13:11:21.143766  892584 command_runner.go:130] > # shared_cpuset = ""
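Illustration only: both cpuset options accept the Linux CPU list syntax mentioned above; the CPU numbers below are hypothetical and, per the comment, infra_ctr_cpuset would normally be aligned with the kubelet's reserved-cpus.

	infra_ctr_cpuset = "0"
	shared_cpuset = "4-7"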
	I0520 13:11:21.143778  892584 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 13:11:21.143789  892584 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 13:11:21.143799  892584 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 13:11:21.143809  892584 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 13:11:21.143819  892584 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 13:11:21.143825  892584 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 13:11:21.143833  892584 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 13:11:21.143837  892584 command_runner.go:130] > # enable_criu_support = false
	I0520 13:11:21.143847  892584 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 13:11:21.143860  892584 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 13:11:21.143871  892584 command_runner.go:130] > # enable_pod_events = false
	I0520 13:11:21.143883  892584 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 13:11:21.143910  892584 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 13:11:21.143919  892584 command_runner.go:130] > # default_runtime = "runc"
	I0520 13:11:21.143927  892584 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 13:11:21.143942  892584 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 13:11:21.143958  892584 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 13:11:21.143972  892584 command_runner.go:130] > # creation as a file is not desired either.
	I0520 13:11:21.143988  892584 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 13:11:21.143995  892584 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 13:11:21.144000  892584 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 13:11:21.144004  892584 command_runner.go:130] > # ]
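Tying the comment above to concrete syntax (illustrative only; this run leaves the list empty): rejecting the /etc/hostname case it describes would be written as:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]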
	I0520 13:11:21.144010  892584 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 13:11:21.144017  892584 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 13:11:21.144023  892584 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 13:11:21.144030  892584 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 13:11:21.144034  892584 command_runner.go:130] > #
	I0520 13:11:21.144040  892584 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 13:11:21.144046  892584 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 13:11:21.144067  892584 command_runner.go:130] > # runtime_type = "oci"
	I0520 13:11:21.144074  892584 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 13:11:21.144079  892584 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 13:11:21.144084  892584 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 13:11:21.144089  892584 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 13:11:21.144095  892584 command_runner.go:130] > # monitor_env = []
	I0520 13:11:21.144099  892584 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 13:11:21.144105  892584 command_runner.go:130] > # allowed_annotations = []
	I0520 13:11:21.144111  892584 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 13:11:21.144116  892584 command_runner.go:130] > # Where:
	I0520 13:11:21.144121  892584 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 13:11:21.144129  892584 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 13:11:21.144134  892584 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 13:11:21.144140  892584 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 13:11:21.144144  892584 command_runner.go:130] > #   in $PATH.
	I0520 13:11:21.144149  892584 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 13:11:21.144156  892584 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 13:11:21.144163  892584 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 13:11:21.144169  892584 command_runner.go:130] > #   state.
	I0520 13:11:21.144175  892584 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 13:11:21.144182  892584 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0520 13:11:21.144188  892584 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 13:11:21.144195  892584 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 13:11:21.144201  892584 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 13:11:21.144211  892584 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 13:11:21.144216  892584 command_runner.go:130] > #   The currently recognized values are:
	I0520 13:11:21.144222  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 13:11:21.144231  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 13:11:21.144236  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 13:11:21.144242  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 13:11:21.144250  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 13:11:21.144256  892584 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 13:11:21.144264  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 13:11:21.144270  892584 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 13:11:21.144278  892584 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 13:11:21.144284  892584 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 13:11:21.144290  892584 command_runner.go:130] > #   deprecated option "conmon".
	I0520 13:11:21.144296  892584 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 13:11:21.144303  892584 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 13:11:21.144308  892584 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 13:11:21.144315  892584 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 13:11:21.144323  892584 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0520 13:11:21.144330  892584 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 13:11:21.144336  892584 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 13:11:21.144341  892584 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 13:11:21.144346  892584 command_runner.go:130] > #
	I0520 13:11:21.144350  892584 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 13:11:21.144353  892584 command_runner.go:130] > #
	I0520 13:11:21.144360  892584 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 13:11:21.144368  892584 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 13:11:21.144371  892584 command_runner.go:130] > #
	I0520 13:11:21.144379  892584 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 13:11:21.144389  892584 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 13:11:21.144392  892584 command_runner.go:130] > #
	I0520 13:11:21.144398  892584 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 13:11:21.144404  892584 command_runner.go:130] > # feature.
	I0520 13:11:21.144408  892584 command_runner.go:130] > #
	I0520 13:11:21.144415  892584 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 13:11:21.144421  892584 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 13:11:21.144429  892584 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 13:11:21.144438  892584 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 13:11:21.144446  892584 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 13:11:21.144450  892584 command_runner.go:130] > #
	I0520 13:11:21.144458  892584 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 13:11:21.144464  892584 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 13:11:21.144469  892584 command_runner.go:130] > #
	I0520 13:11:21.144475  892584 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 13:11:21.144482  892584 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 13:11:21.144486  892584 command_runner.go:130] > #
	I0520 13:11:21.144493  892584 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 13:11:21.144499  892584 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 13:11:21.144505  892584 command_runner.go:130] > # limitation.
	I0520 13:11:21.144509  892584 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 13:11:21.144515  892584 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 13:11:21.144519  892584 command_runner.go:130] > runtime_type = "oci"
	I0520 13:11:21.144524  892584 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 13:11:21.144528  892584 command_runner.go:130] > runtime_config_path = ""
	I0520 13:11:21.144532  892584 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 13:11:21.144538  892584 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 13:11:21.144542  892584 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 13:11:21.144548  892584 command_runner.go:130] > monitor_env = [
	I0520 13:11:21.144553  892584 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 13:11:21.144556  892584 command_runner.go:130] > ]
	I0520 13:11:21.144560  892584 command_runner.go:130] > privileged_without_host_devices = false
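To make the runtime-handler documentation above concrete: a second handler registered alongside runc might look like the sketch below. The crun path, root directory and annotation list are assumptions for illustration only; this test run defines just the runc handler shown above.

	# hypothetical additional handler, not present in this run's config
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
		"io.kubernetes.cri-o.seccompNotifierAction",
	]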
	I0520 13:11:21.144566  892584 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 13:11:21.144573  892584 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 13:11:21.144579  892584 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 13:11:21.144588  892584 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 13:11:21.144595  892584 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 13:11:21.144602  892584 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 13:11:21.144614  892584 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 13:11:21.144623  892584 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 13:11:21.144629  892584 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 13:11:21.144635  892584 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 13:11:21.144641  892584 command_runner.go:130] > # Example:
	I0520 13:11:21.144645  892584 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 13:11:21.144654  892584 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 13:11:21.144661  892584 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 13:11:21.144666  892584 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 13:11:21.144670  892584 command_runner.go:130] > # cpuset = 0
	I0520 13:11:21.144674  892584 command_runner.go:130] > # cpushares = "0-1"
	I0520 13:11:21.144679  892584 command_runner.go:130] > # Where:
	I0520 13:11:21.144683  892584 command_runner.go:130] > # The workload name is workload-type.
	I0520 13:11:21.144689  892584 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 13:11:21.144698  892584 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 13:11:21.144704  892584 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 13:11:21.144713  892584 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 13:11:21.144728  892584 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
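Putting the workload pieces together, purely as a sketch (the "throttled" name, annotation keys and values are invented): a pod carrying the activation annotation io.crio/throttled would receive the defaults below, and an annotation such as io.crio.throttled.cpushares/<container-name> = "256" would override cpushares for a single container.

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	# cpushares is a plain share count; cpuset uses the Linux CPU list syntax (sketch values)
	cpushares = 512
	cpuset = "0-3"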
	I0520 13:11:21.144732  892584 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 13:11:21.144741  892584 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 13:11:21.144745  892584 command_runner.go:130] > # Default value is set to true
	I0520 13:11:21.144751  892584 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 13:11:21.144756  892584 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 13:11:21.144762  892584 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 13:11:21.144767  892584 command_runner.go:130] > # Default value is set to 'false'
	I0520 13:11:21.144771  892584 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 13:11:21.144777  892584 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 13:11:21.144782  892584 command_runner.go:130] > #
	I0520 13:11:21.144787  892584 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 13:11:21.144792  892584 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 13:11:21.144798  892584 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 13:11:21.144803  892584 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 13:11:21.144808  892584 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 13:11:21.144811  892584 command_runner.go:130] > [crio.image]
	I0520 13:11:21.144816  892584 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 13:11:21.144820  892584 command_runner.go:130] > # default_transport = "docker://"
	I0520 13:11:21.144828  892584 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 13:11:21.144836  892584 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:11:21.144839  892584 command_runner.go:130] > # global_auth_file = ""
	I0520 13:11:21.144844  892584 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 13:11:21.144848  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.144852  892584 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 13:11:21.144859  892584 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 13:11:21.144864  892584 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 13:11:21.144868  892584 command_runner.go:130] > # This option supports live configuration reload.
	I0520 13:11:21.144872  892584 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 13:11:21.144877  892584 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 13:11:21.144883  892584 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 13:11:21.144888  892584 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 13:11:21.144893  892584 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 13:11:21.144897  892584 command_runner.go:130] > # pause_command = "/pause"
	I0520 13:11:21.144902  892584 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 13:11:21.144907  892584 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 13:11:21.144912  892584 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 13:11:21.144917  892584 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 13:11:21.144922  892584 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 13:11:21.144927  892584 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 13:11:21.144930  892584 command_runner.go:130] > # pinned_images = [
	I0520 13:11:21.144933  892584 command_runner.go:130] > # ]
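For illustration (not configured in this run): exact and glob patterns in pinned_images could look like the following, where the first entry matches the pause_image default quoted earlier and the second, with its trailing glob *, is an invented example.

	pinned_images = [
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:*",
	]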
	I0520 13:11:21.144939  892584 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 13:11:21.144944  892584 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 13:11:21.144952  892584 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 13:11:21.144957  892584 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 13:11:21.144962  892584 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 13:11:21.144965  892584 command_runner.go:130] > # signature_policy = ""
	I0520 13:11:21.144972  892584 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 13:11:21.144978  892584 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 13:11:21.144984  892584 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 13:11:21.144990  892584 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 13:11:21.144995  892584 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 13:11:21.145001  892584 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0520 13:11:21.145009  892584 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 13:11:21.145017  892584 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 13:11:21.145021  892584 command_runner.go:130] > # changing them here.
	I0520 13:11:21.145025  892584 command_runner.go:130] > # insecure_registries = [
	I0520 13:11:21.145028  892584 command_runner.go:130] > # ]
	I0520 13:11:21.145034  892584 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 13:11:21.145041  892584 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 13:11:21.145047  892584 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 13:11:21.145052  892584 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 13:11:21.145056  892584 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 13:11:21.145063  892584 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0520 13:11:21.145066  892584 command_runner.go:130] > # CNI plugins.
	I0520 13:11:21.145071  892584 command_runner.go:130] > [crio.network]
	I0520 13:11:21.145076  892584 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 13:11:21.145081  892584 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 13:11:21.145085  892584 command_runner.go:130] > # cni_default_network = ""
	I0520 13:11:21.145090  892584 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 13:11:21.145094  892584 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 13:11:21.145099  892584 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 13:11:21.145102  892584 command_runner.go:130] > # plugin_dirs = [
	I0520 13:11:21.145105  892584 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 13:11:21.145108  892584 command_runner.go:130] > # ]
	I0520 13:11:21.145113  892584 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 13:11:21.145117  892584 command_runner.go:130] > [crio.metrics]
	I0520 13:11:21.145122  892584 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 13:11:21.145131  892584 command_runner.go:130] > enable_metrics = true
	I0520 13:11:21.145135  892584 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 13:11:21.145139  892584 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 13:11:21.145146  892584 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0520 13:11:21.145154  892584 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 13:11:21.145160  892584 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 13:11:21.145166  892584 command_runner.go:130] > # metrics_collectors = [
	I0520 13:11:21.145169  892584 command_runner.go:130] > # 	"operations",
	I0520 13:11:21.145176  892584 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 13:11:21.145181  892584 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 13:11:21.145186  892584 command_runner.go:130] > # 	"operations_errors",
	I0520 13:11:21.145190  892584 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 13:11:21.145195  892584 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 13:11:21.145201  892584 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 13:11:21.145205  892584 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 13:11:21.145209  892584 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 13:11:21.145212  892584 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 13:11:21.145216  892584 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 13:11:21.145221  892584 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 13:11:21.145229  892584 command_runner.go:130] > # 	"containers_oom_total",
	I0520 13:11:21.145233  892584 command_runner.go:130] > # 	"containers_oom",
	I0520 13:11:21.145236  892584 command_runner.go:130] > # 	"processes_defunct",
	I0520 13:11:21.145240  892584 command_runner.go:130] > # 	"operations_total",
	I0520 13:11:21.145243  892584 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 13:11:21.145248  892584 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 13:11:21.145252  892584 command_runner.go:130] > # 	"operations_errors_total",
	I0520 13:11:21.145256  892584 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 13:11:21.145263  892584 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 13:11:21.145267  892584 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 13:11:21.145273  892584 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 13:11:21.145278  892584 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 13:11:21.145284  892584 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 13:11:21.145289  892584 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 13:11:21.145295  892584 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 13:11:21.145298  892584 command_runner.go:130] > # ]
	I0520 13:11:21.145303  892584 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 13:11:21.145309  892584 command_runner.go:130] > # metrics_port = 9090
	I0520 13:11:21.145315  892584 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 13:11:21.145324  892584 command_runner.go:130] > # metrics_socket = ""
	I0520 13:11:21.145331  892584 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 13:11:21.145342  892584 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 13:11:21.145352  892584 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 13:11:21.145360  892584 command_runner.go:130] > # certificate on any modification event.
	I0520 13:11:21.145365  892584 command_runner.go:130] > # metrics_cert = ""
	I0520 13:11:21.145373  892584 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 13:11:21.145378  892584 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 13:11:21.145382  892584 command_runner.go:130] > # metrics_key = ""
	I0520 13:11:21.145388  892584 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 13:11:21.145394  892584 command_runner.go:130] > [crio.tracing]
	I0520 13:11:21.145399  892584 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 13:11:21.145402  892584 command_runner.go:130] > # enable_tracing = false
	I0520 13:11:21.145408  892584 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0520 13:11:21.145414  892584 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 13:11:21.145420  892584 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 13:11:21.145432  892584 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 13:11:21.145439  892584 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 13:11:21.145445  892584 command_runner.go:130] > [crio.nri]
	I0520 13:11:21.145456  892584 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 13:11:21.145461  892584 command_runner.go:130] > # enable_nri = false
	I0520 13:11:21.145468  892584 command_runner.go:130] > # NRI socket to listen on.
	I0520 13:11:21.145472  892584 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 13:11:21.145479  892584 command_runner.go:130] > # NRI plugin directory to use.
	I0520 13:11:21.145484  892584 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 13:11:21.145493  892584 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 13:11:21.145498  892584 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 13:11:21.145503  892584 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 13:11:21.145509  892584 command_runner.go:130] > # nri_disable_connections = false
	I0520 13:11:21.145515  892584 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 13:11:21.145522  892584 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 13:11:21.145530  892584 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 13:11:21.145534  892584 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 13:11:21.145539  892584 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 13:11:21.145545  892584 command_runner.go:130] > [crio.stats]
	I0520 13:11:21.145551  892584 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 13:11:21.145558  892584 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 13:11:21.145562  892584 command_runner.go:130] > # stats_collection_period = 0
	I0520 13:11:21.145597  892584 command_runner.go:130] ! time="2024-05-20 13:11:21.113853907Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 13:11:21.145611  892584 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 13:11:21.145713  892584 cni.go:84] Creating CNI manager for ""
	I0520 13:11:21.145729  892584 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 13:11:21.145746  892584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:11:21.145767  892584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-865571 NodeName:multinode-865571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:11:21.145913  892584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-865571"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:11:21.145976  892584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:11:21.156009  892584 command_runner.go:130] > kubeadm
	I0520 13:11:21.156033  892584 command_runner.go:130] > kubectl
	I0520 13:11:21.156040  892584 command_runner.go:130] > kubelet
	I0520 13:11:21.156065  892584 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:11:21.156117  892584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:11:21.165394  892584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0520 13:11:21.182167  892584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:11:21.198614  892584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0520 13:11:21.215064  892584 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I0520 13:11:21.218894  892584 command_runner.go:130] > 192.168.39.78	control-plane.minikube.internal
	I0520 13:11:21.218964  892584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:11:21.350623  892584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:11:21.365944  892584 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571 for IP: 192.168.39.78
	I0520 13:11:21.365974  892584 certs.go:194] generating shared ca certs ...
	I0520 13:11:21.366009  892584 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:11:21.366186  892584 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:11:21.366224  892584 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:11:21.366234  892584 certs.go:256] generating profile certs ...
	I0520 13:11:21.366309  892584 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/client.key
	I0520 13:11:21.366369  892584 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.key.5cb03992
	I0520 13:11:21.366403  892584 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.key
	I0520 13:11:21.366414  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 13:11:21.366435  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 13:11:21.366447  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 13:11:21.366456  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 13:11:21.366466  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 13:11:21.366478  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 13:11:21.366487  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 13:11:21.366501  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 13:11:21.366559  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:11:21.366608  892584 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:11:21.366622  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:11:21.366655  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:11:21.366682  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:11:21.366703  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:11:21.366745  892584 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:11:21.366773  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem -> /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.366786  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.366799  892584 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.367419  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:11:21.391623  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:11:21.415017  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:11:21.438401  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:11:21.462297  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 13:11:21.485042  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 13:11:21.508111  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:11:21.531614  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/multinode-865571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:11:21.554150  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:11:21.576977  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:11:21.600009  892584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:11:21.622760  892584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:11:21.638799  892584 ssh_runner.go:195] Run: openssl version
	I0520 13:11:21.644300  892584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 13:11:21.644461  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:11:21.655069  892584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.659890  892584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.659996  892584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.660058  892584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:11:21.665490  892584 command_runner.go:130] > 51391683
	I0520 13:11:21.665549  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 13:11:21.675879  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:11:21.687234  892584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.691394  892584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.691543  892584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.691592  892584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:11:21.697018  892584 command_runner.go:130] > 3ec20f2e
	I0520 13:11:21.697197  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:11:21.706509  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:11:21.717337  892584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.721495  892584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.721637  892584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.721690  892584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:11:21.727046  892584 command_runner.go:130] > b5213941
	I0520 13:11:21.727225  892584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:11:21.737177  892584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:11:21.741576  892584 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:11:21.741602  892584 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 13:11:21.741611  892584 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0520 13:11:21.741622  892584 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 13:11:21.741631  892584 command_runner.go:130] > Access: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741639  892584 command_runner.go:130] > Modify: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741651  892584 command_runner.go:130] > Change: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741658  892584 command_runner.go:130] >  Birth: 2024-05-20 13:05:11.460730216 +0000
	I0520 13:11:21.741713  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:11:21.747463  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.747541  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:11:21.753175  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.753243  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:11:21.759102  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.759176  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:11:21.765951  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.766004  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:11:21.772016  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.772226  892584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 13:11:21.777726  892584 command_runner.go:130] > Certificate will not expire
	I0520 13:11:21.777965  892584 kubeadm.go:391] StartCluster: {Name:multinode-865571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-865571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.160 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:11:21.778085  892584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:11:21.778118  892584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:11:21.812887  892584 command_runner.go:130] > 9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063
	I0520 13:11:21.812926  892584 command_runner.go:130] > 49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380
	I0520 13:11:21.812933  892584 command_runner.go:130] > ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a
	I0520 13:11:21.812940  892584 command_runner.go:130] > 69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947
	I0520 13:11:21.812947  892584 command_runner.go:130] > 06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b
	I0520 13:11:21.812952  892584 command_runner.go:130] > 0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7
	I0520 13:11:21.812957  892584 command_runner.go:130] > 5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483
	I0520 13:11:21.813098  892584 command_runner.go:130] > e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66
	I0520 13:11:21.814377  892584 cri.go:89] found id: "9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063"
	I0520 13:11:21.814399  892584 cri.go:89] found id: "49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380"
	I0520 13:11:21.814404  892584 cri.go:89] found id: "ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a"
	I0520 13:11:21.814409  892584 cri.go:89] found id: "69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947"
	I0520 13:11:21.814413  892584 cri.go:89] found id: "06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b"
	I0520 13:11:21.814418  892584 cri.go:89] found id: "0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7"
	I0520 13:11:21.814422  892584 cri.go:89] found id: "5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483"
	I0520 13:11:21.814427  892584 cri.go:89] found id: "e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66"
	I0520 13:11:21.814431  892584 cri.go:89] found id: ""
	I0520 13:11:21.814471  892584 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.683005515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24d0c732-71ed-45a9-9634-cc89b67c33b2 name=/runtime.v1.RuntimeService/Version
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.684643808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcbe840b-6f65-4d96-a703-95ddfa797656 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.685232979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210906685199524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcbe840b-6f65-4d96-a703-95ddfa797656 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.685860034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fdab4b8-efef-47a2-a726-1cfffec14dd5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.685965530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fdab4b8-efef-47a2-a726-1cfffec14dd5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.686553090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,PodSandboxId:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716210722437237035,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,PodSandboxId:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716210688936835109,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,PodSandboxId:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716210688896796641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,PodSandboxId:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716210688812495630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]
string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,PodSandboxId:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716210688735139812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.ku
bernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,PodSandboxId:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210683945583978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,PodSandboxId:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210683870216754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7
02264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,PodSandboxId:2db53cc70394a7b78dd4ffab8cc12c10e3c78b7b9852a34ee6bf3aa76b4db655,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716210683869191601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,PodSandboxId:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210683792239408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a58ddb4bc5ae1e43a201f39acb74b3fc8eb3fc621b2ae13717afc9bd73ff76,PodSandboxId:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716210381937703531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380,PodSandboxId:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716210337721235776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063,PodSandboxId:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716210337726240942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947,PodSandboxId:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716210336013827850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a,PodSandboxId:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716210336035187991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18
-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7,PodSandboxId:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716210315376725938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b,PodSandboxId:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210315418737817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483,PodSandboxId:b54986d6b3c407e4fbf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716210315371140949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},An
notations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66,PodSandboxId:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716210315267220057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 321d3fc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fdab4b8-efef-47a2-a726-1cfffec14dd5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.733884034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3051959b-7669-4dd8-94d6-6b42eeb5c099 name=/runtime.v1.RuntimeService/Version
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.733960974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3051959b-7669-4dd8-94d6-6b42eeb5c099 name=/runtime.v1.RuntimeService/Version
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.736022195Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4c94ffc-bba6-41aa-994c-60c7051196f0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.736663439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210906736631711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4c94ffc-bba6-41aa-994c-60c7051196f0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.737735915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aefd393-2426-4818-9a4a-48c975cdbafb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.737841120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aefd393-2426-4818-9a4a-48c975cdbafb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.738326365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,PodSandboxId:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716210722437237035,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,PodSandboxId:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716210688936835109,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,PodSandboxId:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716210688896796641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,PodSandboxId:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716210688812495630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]
string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,PodSandboxId:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716210688735139812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.ku
bernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,PodSandboxId:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210683945583978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,PodSandboxId:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210683870216754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7
02264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,PodSandboxId:2db53cc70394a7b78dd4ffab8cc12c10e3c78b7b9852a34ee6bf3aa76b4db655,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716210683869191601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,PodSandboxId:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210683792239408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a58ddb4bc5ae1e43a201f39acb74b3fc8eb3fc621b2ae13717afc9bd73ff76,PodSandboxId:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716210381937703531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380,PodSandboxId:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716210337721235776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063,PodSandboxId:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716210337726240942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947,PodSandboxId:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716210336013827850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a,PodSandboxId:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716210336035187991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18
-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7,PodSandboxId:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716210315376725938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b,PodSandboxId:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210315418737817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483,PodSandboxId:b54986d6b3c407e4fbf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716210315371140949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},An
notations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66,PodSandboxId:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716210315267220057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 321d3fc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aefd393-2426-4818-9a4a-48c975cdbafb name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.783228689Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8c3113fb-6c5d-4ae6-ac9c-c41aad438096 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.783780500Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-c8hj2,Uid:55131023-9fdc-4c5b-86f3-0963e13b54c2,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210722289893401,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:11:28.116288606Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-cck8j,Uid:2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1716210688614205395,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:11:28.116277868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&PodSandboxMetadata{Name:kindnet-p69ft,Uid:a05815a1-89f4-4adf-88f3-d85b1c969cd6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210688498049808,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-05-20T13:11:28.116285802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&PodSandboxMetadata{Name:kube-proxy-z8dbs,Uid:826e8825-487e-4a9e-8a18-21245055c769,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210688488250170,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:11:28.116290317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b9037bf4-865b-4ef6-8138-1a3c6a8d1500,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1716210688447654328,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-20T13:11:28.116292248Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-865571,Uid:aefcc152b93d64e03162596bcb208fb1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210683645299923,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.78:8443,kubernetes.io/config.hash: aefcc152b93d64e03162596bcb208fb1,kubernetes.io/config.seen: 2024-05-20T13:11:23.107992644Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2db53cc70394a7b78dd4ffab8cc12c10e3c
78b7b9852a34ee6bf3aa76b4db655,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-865571,Uid:bea551ee8f74628c5c3ff37e899e26a0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210683643565706,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bea551ee8f74628c5c3ff37e899e26a0,kubernetes.io/config.seen: 2024-05-20T13:11:23.107996456Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-865571,Uid:a28ed0baba5785958bfc3b772e1e289e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210683642961226,Labels:map[string]string{component: kube-scheduler,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a28ed0baba5785958bfc3b772e1e289e,kubernetes.io/config.seen: 2024-05-20T13:11:23.107997301Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&PodSandboxMetadata{Name:etcd-multinode-865571,Uid:001f5f73c09833ac52c0fd669fee7361,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716210683622970170,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.78:2379,kubernete
s.io/config.hash: 001f5f73c09833ac52c0fd669fee7361,kubernetes.io/config.seen: 2024-05-20T13:11:23.107998204Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-c8hj2,Uid:55131023-9fdc-4c5b-86f3-0963e13b54c2,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210380768576387,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:06:20.453257978Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b9037bf4-865b-4ef6-8138-1a3c6a8d1500,Namespace:kube-system,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1716210337551851420,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\"
:\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-20T13:05:37.243595951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-cck8j,Uid:2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210337543915465,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:05:37.237364415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-z8dbs,Uid:826e8825-487e-4a9e-8a18-21245055c769,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210335908671317,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:05:34.702471052Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&PodSandboxMetadata{Name:kindnet-p69ft,Uid:a05815a1-89f4-4adf-88f3-d85b1c969cd6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210335630687998,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,k8s-app: kindnet,pod-t
emplate-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:05:34.725004345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-865571,Uid:aefcc152b93d64e03162596bcb208fb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210315173664558,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.78:8443,kubernetes.io/config.hash: aefcc152b93d64e03162596bcb208fb1,kubernetes.io/config.seen: 2024-05-20T13:05:14.685203317Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b54986d6b3c407e4f
bf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-865571,Uid:bea551ee8f74628c5c3ff37e899e26a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210315161993510,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bea551ee8f74628c5c3ff37e899e26a0,kubernetes.io/config.seen: 2024-05-20T13:05:14.685206580Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-865571,Uid:a28ed0baba5785958bfc3b772e1e289e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210315139231184,Labels:map[string]string{co
mponent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a28ed0baba5785958bfc3b772e1e289e,kubernetes.io/config.seen: 2024-05-20T13:05:14.685211357Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-865571,Uid:001f5f73c09833ac52c0fd669fee7361,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716210315124536353,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://1
92.168.39.78:2379,kubernetes.io/config.hash: 001f5f73c09833ac52c0fd669fee7361,kubernetes.io/config.seen: 2024-05-20T13:05:14.685200147Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8c3113fb-6c5d-4ae6-ac9c-c41aad438096 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.785295953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac5fd378-2aa2-4450-8302-0abb3647a899 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.785367915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac5fd378-2aa2-4450-8302-0abb3647a899 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.786340710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,PodSandboxId:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716210722437237035,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,PodSandboxId:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716210688936835109,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,PodSandboxId:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716210688896796641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,PodSandboxId:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716210688812495630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]
string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,PodSandboxId:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716210688735139812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.ku
bernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,PodSandboxId:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210683945583978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,PodSandboxId:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210683870216754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7
02264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,PodSandboxId:2db53cc70394a7b78dd4ffab8cc12c10e3c78b7b9852a34ee6bf3aa76b4db655,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716210683869191601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,PodSandboxId:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210683792239408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a58ddb4bc5ae1e43a201f39acb74b3fc8eb3fc621b2ae13717afc9bd73ff76,PodSandboxId:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716210381937703531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380,PodSandboxId:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716210337721235776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063,PodSandboxId:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716210337726240942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947,PodSandboxId:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716210336013827850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a,PodSandboxId:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716210336035187991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18
-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7,PodSandboxId:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716210315376725938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b,PodSandboxId:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210315418737817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483,PodSandboxId:b54986d6b3c407e4fbf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716210315371140949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},An
notations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66,PodSandboxId:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716210315267220057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 321d3fc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac5fd378-2aa2-4450-8302-0abb3647a899 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.790709079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19da9aa3-966c-492b-9dff-dd9372c130ee name=/runtime.v1.RuntimeService/Version
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.790781083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19da9aa3-966c-492b-9dff-dd9372c130ee name=/runtime.v1.RuntimeService/Version
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.792922724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f52575ae-e7d3-40f4-a74c-f469eb1ca58f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.793599700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716210906793572343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f52575ae-e7d3-40f4-a74c-f469eb1ca58f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.794148296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af04d1e2-919c-4ba0-b361-1f76f1ca1bb2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.794238457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af04d1e2-919c-4ba0-b361-1f76f1ca1bb2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:15:06 multinode-865571 crio[2863]: time="2024-05-20 13:15:06.794735449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f4a0de9fef6e7eb40bbae0932c0b136d37563d0d273170321cb141d28f63823,PodSandboxId:cf6dd7caebc6815f5cc7d2c39e045a53f28cf68ee717dd51e9412ebbab25777d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716210722437237035,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95,PodSandboxId:1e895fbf4fd2cd9228480eb84b885b285bed03c87e0cf398f167fca9686cb5be,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716210688936835109,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2,PodSandboxId:db2c0c5a5df49c7cac506459071fb592436354190245b99b756b98004bba0f6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716210688896796641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5,PodSandboxId:4008986e60e56362fa42182b91af214be6591c4a47a18868dc02951ec151695b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716210688812495630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18-21245055c769,},Annotations:map[string]
string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94037199ce629680785f1e448a8913e82e2f4426efc0940ed47d6cf365a5c0ce,PodSandboxId:1262dc3c426275b4bfd5dbd429a3c530eafcd626838226f1a931d49bc0ff86fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716210688735139812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.ku
bernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1,PodSandboxId:77bbfc6a88c3b294519da665cff3dc98ded6fe9cb8d861c694844f066ffb3537,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716210683945583978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab,PodSandboxId:33d2ad28bdad12aab8711a3a5e632a7f39bad95a6ea41d70feaef10727df08a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716210683870216754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7
02264,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed,PodSandboxId:2db53cc70394a7b78dd4ffab8cc12c10e3c78b7b9852a34ee6bf3aa76b4db655,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716210683869191601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099,PodSandboxId:ebacc9eb00e839935555b01c0ef909035e8579ac2a974dbe9982b8f1dd4fb61d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716210683792239408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.container.hash: 321d3fc8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a58ddb4bc5ae1e43a201f39acb74b3fc8eb3fc621b2ae13717afc9bd73ff76,PodSandboxId:2fae850c319d2831daba8976c8d688aa5a415f6c4b50f21360dfc1771fc69c2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716210381937703531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-c8hj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55131023-9fdc-4c5b-86f3-0963e13b54c2,},Annotations:map[string]string{io.kubernetes.container.hash: 20e2af6b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49209f7e35c79d85d232bd4db7b851e73cfbd1810f83be92106c2f92d736e380,PodSandboxId:ee49075e2aa277e55a64a2d1a1ab70d7bc2e5333fcdb399d3d805563b63a5c6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716210337721235776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9037bf4-865b-4ef6-8138-1a3c6a8d1500,},Annotations:map[string]string{io.kubernetes.container.hash: bdf26ab6,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063,PodSandboxId:f64711606f5e8e7959201c4168a3b44e2a179bd249814ed1dc122ca1fbee5d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716210337726240942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cck8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdfbdfb-82cd-402d-9ec5-42adc84fa06c,},Annotations:map[string]string{io.kubernetes.container.hash: e3feef09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947,PodSandboxId:cd500b16c8cb83efdfd493cef3a827c9600057dbc63d4f7b7e0b681b63adb8f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716210336013827850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p69ft,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: a05815a1-89f4-4adf-88f3-d85b1c969cd6,},Annotations:map[string]string{io.kubernetes.container.hash: f737f022,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a,PodSandboxId:4901288e3b49a2b62ace04da2fefb6461e54d99dba437822aa935c2310ced8f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716210336035187991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 826e8825-487e-4a9e-8a18
-21245055c769,},Annotations:map[string]string{io.kubernetes.container.hash: bfcae81e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7,PodSandboxId:dc55454121a3d0ceee2325b613b17580033d98a74270d9d3b937a953f196af5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716210315376725938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28ed0baba5785958bfc3b772e1e289e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b,PodSandboxId:c9f1dcd9b10860776667a2e5c4934b06f2b34696b4308868b9241c5d82e8273c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716210315418737817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefcc152b93d64e03162596bcb208fb1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483,PodSandboxId:b54986d6b3c407e4fbf53eec3fcfe3d9eb9a6a8063de4ae7f03ed0b2ce3387f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716210315371140949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea551ee8f74628c5c3ff37e899e26a0,},An
notations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66,PodSandboxId:7b72f9c76df4c75fe8b00188c1b48201c17d787517ebc2af306e3038acf01fc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716210315267220057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-865571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001f5f73c09833ac52c0fd669fee7361,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 321d3fc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af04d1e2-919c-4ba0-b361-1f76f1ca1bb2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f4a0de9fef6e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   cf6dd7caebc68       busybox-fc5497c4f-c8hj2
	ca6e5c0b3bc62       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   1e895fbf4fd2c       kindnet-p69ft
	bcfb651082e6f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   db2c0c5a5df49       coredns-7db6d8ff4d-cck8j
	25ca0eed2cac1       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   4008986e60e56       kube-proxy-z8dbs
	94037199ce629       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   1262dc3c42627       storage-provisioner
	cf4d2cd83a9cd       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   77bbfc6a88c3b       kube-scheduler-multinode-865571
	c3686a8518528       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   33d2ad28bdad1       kube-apiserver-multinode-865571
	3d4a3b19bb8e9       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   2db53cc70394a       kube-controller-manager-multinode-865571
	00722f6248827       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   ebacc9eb00e83       etcd-multinode-865571
	26a58ddb4bc5a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   2fae850c319d2       busybox-fc5497c4f-c8hj2
	9b374e240a6cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   f64711606f5e8       coredns-7db6d8ff4d-cck8j
	49209f7e35c79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   ee49075e2aa27       storage-provisioner
	ae13e8e8db5a4       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   4901288e3b49a       kube-proxy-z8dbs
	69415b4290f14       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   cd500b16c8cb8       kindnet-p69ft
	06e853ffdd1f3       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   c9f1dcd9b1086       kube-apiserver-multinode-865571
	0332c5cdab59d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   dc55454121a3d       kube-scheduler-multinode-865571
	5e94c8b3558a8       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   b54986d6b3c40       kube-controller-manager-multinode-865571
	e379bbf0ff586       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   7b72f9c76df4c       etcd-multinode-865571
	
	
	==> coredns [9b374e240a6cc10cc2670eb79021df72aa82a1e5c711dc589b7286e32b846063] <==
	[INFO] 10.244.0.3:58828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001650207s
	[INFO] 10.244.0.3:53349 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101685s
	[INFO] 10.244.0.3:53577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00005782s
	[INFO] 10.244.0.3:54163 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000932243s
	[INFO] 10.244.0.3:43945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134913s
	[INFO] 10.244.0.3:47395 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053743s
	[INFO] 10.244.0.3:32849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080995s
	[INFO] 10.244.1.2:43178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121678s
	[INFO] 10.244.1.2:56268 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075606s
	[INFO] 10.244.1.2:44888 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065126s
	[INFO] 10.244.1.2:57864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097621s
	[INFO] 10.244.0.3:55327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104835s
	[INFO] 10.244.0.3:55984 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064871s
	[INFO] 10.244.0.3:47136 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094416s
	[INFO] 10.244.0.3:42003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049721s
	[INFO] 10.244.1.2:44612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115061s
	[INFO] 10.244.1.2:33740 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179295s
	[INFO] 10.244.1.2:49252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097268s
	[INFO] 10.244.1.2:42925 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000201463s
	[INFO] 10.244.0.3:51539 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074001s
	[INFO] 10.244.0.3:58314 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000056116s
	[INFO] 10.244.0.3:52703 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000039389s
	[INFO] 10.244.0.3:49801 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000029311s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bcfb651082e6fa88f41f7e8ff52504e1818e577364b1f1aa445e14fb5480b3d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51456 - 39602 "HINFO IN 7001816019168731813.6639323213373340617. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018385616s
	
	
	==> describe nodes <==
	Name:               multinode-865571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-865571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=multinode-865571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_05_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:05:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-865571
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:15:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:11:27 +0000   Mon, 20 May 2024 13:05:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-865571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fc2d737a0984208b366b4fc8aa543ec
	  System UUID:                6fc2d737-a098-4208-b366-b4fc8aa543ec
	  Boot ID:                    98d576f3-e9e6-429a-b515-0222cfdb89ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-c8hj2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 coredns-7db6d8ff4d-cck8j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m33s
	  kube-system                 etcd-multinode-865571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m47s
	  kube-system                 kindnet-p69ft                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m33s
	  kube-system                 kube-apiserver-multinode-865571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-controller-manager-multinode-865571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-proxy-z8dbs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                 kube-scheduler-multinode-865571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m30s                  kube-proxy       
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m53s (x8 over 9m53s)  kubelet          Node multinode-865571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s (x8 over 9m53s)  kubelet          Node multinode-865571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s (x7 over 9m53s)  kubelet          Node multinode-865571 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m47s                  kubelet          Node multinode-865571 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m47s                  kubelet          Node multinode-865571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     9m47s                  kubelet          Node multinode-865571 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m47s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m34s                  node-controller  Node multinode-865571 event: Registered Node multinode-865571 in Controller
	  Normal  NodeReady                9m30s                  kubelet          Node multinode-865571 status is now: NodeReady
	  Normal  Starting                 3m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s (x8 over 3m44s)  kubelet          Node multinode-865571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x8 over 3m44s)  kubelet          Node multinode-865571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x7 over 3m44s)  kubelet          Node multinode-865571 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node multinode-865571 event: Registered Node multinode-865571 in Controller
	
	
	Name:               multinode-865571-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-865571-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=multinode-865571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T13_12_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:12:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-865571-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:12:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:13:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:13:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:13:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 13:12:38 +0000   Mon, 20 May 2024 13:13:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    multinode-865571-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 da55c52cb4d14d08a06e12ee1db3a0fe
	  System UUID:                da55c52c-b4d1-4d08-a06e-12ee1db3a0fe
	  Boot ID:                    2c3d7f87-9f65-413a-a97a-d130b737936f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d52mq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kindnet-zp4xs              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m58s
	  kube-system                 kube-proxy-pntzt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  Starting                 8m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m58s (x2 over 8m58s)  kubelet          Node multinode-865571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x2 over 8m58s)  kubelet          Node multinode-865571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x2 over 8m58s)  kubelet          Node multinode-865571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m49s                  kubelet          Node multinode-865571-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)        kubelet          Node multinode-865571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)        kubelet          Node multinode-865571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)        kubelet          Node multinode-865571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m53s                  kubelet          Node multinode-865571-m02 status is now: NodeReady
	  Normal  NodeNotReady             107s                   node-controller  Node multinode-865571-m02 status is now: NodeNotReady
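	
	The node.kubernetes.io/unreachable taints and the Unknown conditions above (all transitioning at 13:13:20, about 40 seconds after the last heartbeat and lease renewal at 13:12:38) indicate that the kubelet on multinode-865571-m02 stopped posting status, which is what the NodeNotReady event also records. For reference only (not part of the captured output, and assuming kubectl access to this cluster), the node state can be re-inspected with:
	
	  kubectl get nodes -o wide
	  kubectl describe node multinode-865571-m02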
	
	
	==> dmesg <==
	[  +0.059267] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059472] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.188674] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.110989] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261702] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.131122] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.719255] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.063826] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.982838] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.079635] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.576272] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[  +0.100968] kauditd_printk_skb: 21 callbacks suppressed
	[May20 13:06] kauditd_printk_skb: 82 callbacks suppressed
	[May20 13:11] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.142158] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.159805] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.148025] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.276962] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +6.977805] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[  +0.083228] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.571542] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +5.671934] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.659013] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.314619] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[May20 13:12] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [00722f6248827ddfbb138758abd87b9eabc088f85694c8be104efe50f73d2099] <==
	{"level":"info","ts":"2024-05-20T13:11:24.159856Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:11:24.159865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:11:24.160083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 switched to configuration voters=(9511011272858222243)"}
	{"level":"info","ts":"2024-05-20T13:11:24.160148Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","added-peer-id":"83fde65c75733ea3","added-peer-peer-urls":["https://192.168.39.78:2380"]}
	{"level":"info","ts":"2024-05-20T13:11:24.160242Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:11:24.160281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:11:24.175124Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:11:24.175308Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"83fde65c75733ea3","initial-advertise-peer-urls":["https://192.168.39.78:2380"],"listen-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:11:24.175358Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:11:24.175909Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:11:24.175941Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:11:25.620486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T13:11:25.620522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:11:25.620571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 2"}
	{"level":"info","ts":"2024-05-20T13:11:25.620585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.620591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.62061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.62062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-20T13:11:25.623313Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"83fde65c75733ea3","local-member-attributes":"{Name:multinode-865571 ClientURLs:[https://192.168.39.78:2379]}","request-path":"/0/members/83fde65c75733ea3/attributes","cluster-id":"254f9db842b1870b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:11:25.623479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:11:25.623565Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:11:25.623595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:11:25.623509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:11:25.62576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"info","ts":"2024-05-20T13:11:25.625886Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e379bbf0ff5861315fce8d86a6ce9457062a653d0080d86ce9df857a49736f66] <==
	{"level":"info","ts":"2024-05-20T13:05:15.951566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:05:15.951597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:05:15.955473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:05:15.9556Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:05:15.955693Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:05:15.957146Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"warn","ts":"2024-05-20T13:06:09.860745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.722161ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4513609126419432577 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-865571-m02.17d1343cff565fad\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-865571-m02.17d1343cff565fad\" value_size:646 lease:4513609126419431663 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T13:06:09.861334Z","caller":"traceutil/trace.go:171","msg":"trace[1115152632] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"249.956894ms","start":"2024-05-20T13:06:09.611357Z","end":"2024-05-20T13:06:09.861314Z","steps":["trace[1115152632] 'process raft request'  (duration: 89.08972ms)","trace[1115152632] 'compare'  (duration: 159.620008ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:06:09.861572Z","caller":"traceutil/trace.go:171","msg":"trace[1266436582] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"177.534117ms","start":"2024-05-20T13:06:09.684028Z","end":"2024-05-20T13:06:09.861562Z","steps":["trace[1266436582] 'process raft request'  (duration: 177.210527ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:06:52.82417Z","caller":"traceutil/trace.go:171","msg":"trace[1236714383] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"249.150681ms","start":"2024-05-20T13:06:52.574981Z","end":"2024-05-20T13:06:52.824132Z","steps":["trace[1236714383] 'process raft request'  (duration: 227.208873ms)","trace[1236714383] 'compare'  (duration: 21.721199ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:06:52.825718Z","caller":"traceutil/trace.go:171","msg":"trace[197194243] linearizableReadLoop","detail":"{readStateIndex:637; appliedIndex:635; }","duration":"189.261631ms","start":"2024-05-20T13:06:52.636445Z","end":"2024-05-20T13:06:52.825706Z","steps":["trace[197194243] 'read index received'  (duration: 165.859729ms)","trace[197194243] 'applied index is now lower than readState.Index'  (duration: 23.401433ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T13:06:52.825849Z","caller":"traceutil/trace.go:171","msg":"trace[1707923856] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"200.10424ms","start":"2024-05-20T13:06:52.625738Z","end":"2024-05-20T13:06:52.825842Z","steps":["trace[1707923856] 'process raft request'  (duration: 198.474205ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:06:52.826113Z","caller":"traceutil/trace.go:171","msg":"trace[755531351] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"167.449051ms","start":"2024-05-20T13:06:52.658658Z","end":"2024-05-20T13:06:52.826107Z","steps":["trace[755531351] 'process raft request'  (duration: 165.594197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T13:06:52.826425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.844398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-865571-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-20T13:06:52.826491Z","caller":"traceutil/trace.go:171","msg":"trace[1260058095] range","detail":"{range_begin:/registry/minions/multinode-865571-m03; range_end:; response_count:1; response_revision:604; }","duration":"190.117555ms","start":"2024-05-20T13:06:52.636359Z","end":"2024-05-20T13:06:52.826477Z","steps":["trace[1260058095] 'agreement among raft nodes before linearized reading'  (duration: 189.904815ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T13:09:42.288119Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T13:09:42.288286Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-865571","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-05-20T13:09:42.288507Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T13:09:42.288592Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T13:09:42.322223Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T13:09:42.32231Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T13:09:42.323983Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"83fde65c75733ea3","current-leader-member-id":"83fde65c75733ea3"}
	{"level":"info","ts":"2024-05-20T13:09:42.328078Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:09:42.328182Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-20T13:09:42.328208Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-865571","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
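	
	Taken together, the two etcd blocks show an uneventful handover across the node restart: the older member (e379bbf0...) received SIGTERM at 13:09:42 and closed cleanly, and the newer member (00722f62...) came back at 13:11:24 and promptly won the election at term 3 as the cluster's single voting member. For reference only (not part of the captured run, and assuming kubectl access to this cluster), the current instance's logs can be fetched with:
	
	  kubectl -n kube-system logs etcd-multinode-865571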
	
	
	==> kernel <==
	 13:15:07 up 10 min,  0 users,  load average: 1.88, 1.06, 0.46
	Linux multinode-865571 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69415b4290f146f86a9dcfd2ee8941f303dbe47717f101940d28be0e3b62a947] <==
	I0520 13:08:57.148918       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:07.162542       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:07.162730       1 main.go:227] handling current node
	I0520 13:09:07.162772       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:07.162802       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:07.162950       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:07.162990       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:17.167349       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:17.167478       1 main.go:227] handling current node
	I0520 13:09:17.167503       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:17.167522       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:17.167635       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:17.167655       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:27.180589       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:27.180715       1 main.go:227] handling current node
	I0520 13:09:27.180755       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:27.180774       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:27.180934       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:27.181028       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	I0520 13:09:37.185724       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:09:37.185959       1 main.go:227] handling current node
	I0520 13:09:37.185997       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:09:37.186018       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:09:37.186133       1 main.go:223] Handling node with IPs: map[192.168.39.160:{}]
	I0520 13:09:37.186152       1 main.go:250] Node multinode-865571-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ca6e5c0b3bc623bd99b413c1bbba8235aff90d0fc19a01c2dc0e3f073d9a2f95] <==
	I0520 13:13:59.912627       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:14:09.922490       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:14:09.922601       1 main.go:227] handling current node
	I0520 13:14:09.922628       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:14:09.922718       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:14:19.935299       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:14:19.935342       1 main.go:227] handling current node
	I0520 13:14:19.935351       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:14:19.935357       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:14:29.947888       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:14:29.947932       1 main.go:227] handling current node
	I0520 13:14:29.947956       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:14:29.947962       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:14:39.955252       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:14:39.955536       1 main.go:227] handling current node
	I0520 13:14:39.955645       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:14:39.955673       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:14:49.962046       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:14:49.962090       1 main.go:227] handling current node
	I0520 13:14:49.962102       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:14:49.962110       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
	I0520 13:14:59.966083       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0520 13:14:59.966463       1 main.go:227] handling current node
	I0520 13:14:59.966567       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0520 13:14:59.966594       1 main.go:250] Node multinode-865571-m02 has CIDR [10.244.1.0/24] 
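	
	After the restart this kindnet instance only reconciles two nodes (192.168.39.78 and 192.168.39.84), whereas the earlier instance above was still tracking multinode-865571-m03 with pod CIDR 10.244.3.0/24. A quick cross-check of node-to-CIDR assignments, purely illustrative and assuming kubectl access to this cluster:
	
	  kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR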
	
	
	==> kube-apiserver [06e853ffdd1f323c7f1300e9222565318667e83e630a7c7103a7a488b13f8c6b] <==
	W0520 13:09:42.309235       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.309265       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.309295       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.311923       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.311992       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312022       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312050       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312089       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312119       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312144       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312171       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312197       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312224       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312254       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312282       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312310       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312336       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312366       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312502       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312545       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312575       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312604       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312633       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312811       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:09:42.312921       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c3686a85185284d6053a320903efea0d5d5ef7c565006981d619229d8dea0aab] <==
	I0520 13:11:26.928921       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:11:26.932073       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:11:26.932172       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:11:26.932195       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:11:26.932982       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:11:26.933966       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:11:26.934070       1 shared_informer.go:320] Caches are synced for configmaps
	E0520 13:11:26.941094       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 13:11:26.957199       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:11:26.957261       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:11:26.957286       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:11:26.957309       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 13:11:26.957336       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:11:26.962637       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:11:26.965989       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:11:26.966068       1 policy_source.go:224] refreshing policies
	I0520 13:11:26.992523       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:11:27.835296       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:11:29.301653       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:11:29.495819       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 13:11:29.512176       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:11:29.581889       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:11:29.589604       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:11:40.171525       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 13:11:40.214262       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3d4a3b19bb8e909b6a7d500725b8492a06722a0a8ad04b2dd1af111516a285ed] <==
	I0520 13:12:07.994956       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m02" podCIDRs=["10.244.1.0/24"]
	I0520 13:12:09.883244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.661µs"
	I0520 13:12:09.920628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.974µs"
	I0520 13:12:09.933930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.117µs"
	I0520 13:12:09.949000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.076µs"
	I0520 13:12:09.956058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.386µs"
	I0520 13:12:09.958261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.413µs"
	I0520 13:12:10.987850       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.843µs"
	I0520 13:12:14.811234       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:14.835151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.887µs"
	I0520 13:12:14.848606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.335µs"
	I0520 13:12:16.311161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.144139ms"
	I0520 13:12:16.312725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.626µs"
	I0520 13:12:32.985952       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:33.942186       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:33.943166       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m03\" does not exist"
	I0520 13:12:33.955949       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m03" podCIDRs=["10.244.2.0/24"]
	I0520 13:12:40.259959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:12:45.724493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:13:20.279246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.031079ms"
	I0520 13:13:20.280154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="163.568µs"
	I0520 13:13:39.957498       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-smmdf"
	I0520 13:13:39.979569       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-smmdf"
	I0520 13:13:39.979653       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x2f5v"
	I0520 13:13:39.999769       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x2f5v"
	
	
	==> kube-controller-manager [5e94c8b3558a8cdbcd0584808c8ae0b20e93e90e72bce6497f4d33b751455483] <==
	I0520 13:06:09.869135       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m02\" does not exist"
	I0520 13:06:09.889752       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m02" podCIDRs=["10.244.1.0/24"]
	I0520 13:06:13.815786       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-865571-m02"
	I0520 13:06:18.256482       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:06:20.458109       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.038122ms"
	I0520 13:06:20.488875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.605716ms"
	I0520 13:06:20.507048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.591366ms"
	I0520 13:06:20.507317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.81µs"
	I0520 13:06:22.671227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.326535ms"
	I0520 13:06:22.672459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.269µs"
	I0520 13:06:22.952135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.528878ms"
	I0520 13:06:22.952599       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.285µs"
	I0520 13:06:52.825625       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m03\" does not exist"
	I0520 13:06:52.826040       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:06:52.842266       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m03" podCIDRs=["10.244.2.0/24"]
	I0520 13:06:53.829784       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-865571-m03"
	I0520 13:07:01.092247       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m03"
	I0520 13:07:29.437229       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:07:30.728768       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:07:30.730011       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-865571-m03\" does not exist"
	I0520 13:07:30.740288       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-865571-m03" podCIDRs=["10.244.3.0/24"]
	I0520 13:07:36.699305       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m02"
	I0520 13:08:13.879348       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-865571-m03"
	I0520 13:08:13.923634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.78526ms"
	I0520 13:08:13.924844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.739µs"
	
	
	==> kube-proxy [25ca0eed2cac1c583d143cab2bb82789ab514c597fbc00677a09ce5ab36a23e5] <==
	I0520 13:11:29.238794       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:11:29.263244       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0520 13:11:29.321107       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:11:29.321159       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:11:29.321197       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:11:29.331906       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:11:29.332177       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:11:29.332223       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:11:29.334020       1 config.go:192] "Starting service config controller"
	I0520 13:11:29.334068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:11:29.334093       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:11:29.334097       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:11:29.335262       1 config.go:319] "Starting node config controller"
	I0520 13:11:29.335336       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:11:29.434503       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:11:29.434610       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:11:29.435851       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ae13e8e8db5a4ee977f480ae52f237b1ffbe3e3e635d5dac77065e0b8f99239a] <==
	I0520 13:05:36.185761       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:05:36.194549       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0520 13:05:36.229787       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:05:36.229872       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:05:36.229888       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:05:36.232708       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:05:36.233022       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:05:36.233055       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:05:36.234357       1 config.go:192] "Starting service config controller"
	I0520 13:05:36.234460       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:05:36.234489       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:05:36.234493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:05:36.235232       1 config.go:319] "Starting node config controller"
	I0520 13:05:36.235263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:05:36.335612       1 shared_informer.go:320] Caches are synced for node config
	I0520 13:05:36.335643       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:05:36.335696       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0332c5cdab59d65dca87fe6b32689f2e2868eb4c38fb04ac62e9bbc6c3c413f7] <==
	E0520 13:05:19.170562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 13:05:19.174277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:05:19.174571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:05:19.219646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:05:19.219804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:05:19.281620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:05:19.281935       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:05:19.317090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:05:19.318036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:05:19.408109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 13:05:19.408196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 13:05:19.427492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:05:19.427541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 13:05:19.438119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:05:19.438167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:05:19.482155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:05:19.482241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:05:19.482256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:05:19.482496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:05:19.498187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:05:19.498284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:05:19.508641       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 13:05:19.508694       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 13:05:22.700445       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 13:09:42.285905       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf4d2cd83a9cd95929ebcca0c3ed3b469acae189ba7f75728ed2da0e736d02b1] <==
	I0520 13:11:25.077827       1 serving.go:380] Generated self-signed cert in-memory
	W0520 13:11:26.876721       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 13:11:26.876948       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:11:26.876981       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 13:11:26.877062       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 13:11:26.905245       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 13:11:26.905583       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:11:26.907832       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 13:11:26.908193       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 13:11:26.908312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 13:11:26.908465       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 13:11:27.009472       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.118176    3079 topology_manager.go:215] "Topology Admit Handler" podUID="b9037bf4-865b-4ef6-8138-1a3c6a8d1500" podNamespace="kube-system" podName="storage-provisioner"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.118290    3079 topology_manager.go:215] "Topology Admit Handler" podUID="55131023-9fdc-4c5b-86f3-0963e13b54c2" podNamespace="default" podName="busybox-fc5497c4f-c8hj2"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.126917    3079 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.210791    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a05815a1-89f4-4adf-88f3-d85b1c969cd6-cni-cfg\") pod \"kindnet-p69ft\" (UID: \"a05815a1-89f4-4adf-88f3-d85b1c969cd6\") " pod="kube-system/kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.210912    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a05815a1-89f4-4adf-88f3-d85b1c969cd6-xtables-lock\") pod \"kindnet-p69ft\" (UID: \"a05815a1-89f4-4adf-88f3-d85b1c969cd6\") " pod="kube-system/kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211074    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/826e8825-487e-4a9e-8a18-21245055c769-lib-modules\") pod \"kube-proxy-z8dbs\" (UID: \"826e8825-487e-4a9e-8a18-21245055c769\") " pod="kube-system/kube-proxy-z8dbs"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211778    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a05815a1-89f4-4adf-88f3-d85b1c969cd6-lib-modules\") pod \"kindnet-p69ft\" (UID: \"a05815a1-89f4-4adf-88f3-d85b1c969cd6\") " pod="kube-system/kindnet-p69ft"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211832    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/826e8825-487e-4a9e-8a18-21245055c769-xtables-lock\") pod \"kube-proxy-z8dbs\" (UID: \"826e8825-487e-4a9e-8a18-21245055c769\") " pod="kube-system/kube-proxy-z8dbs"
	May 20 13:11:28 multinode-865571 kubelet[3079]: I0520 13:11:28.211850    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9037bf4-865b-4ef6-8138-1a3c6a8d1500-tmp\") pod \"storage-provisioner\" (UID: \"b9037bf4-865b-4ef6-8138-1a3c6a8d1500\") " pod="kube-system/storage-provisioner"
	May 20 13:11:31 multinode-865571 kubelet[3079]: I0520 13:11:31.029214    3079 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 20 13:12:23 multinode-865571 kubelet[3079]: E0520 13:12:23.212174    3079 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:12:23 multinode-865571 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:12:23 multinode-865571 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:12:23 multinode-865571 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:12:23 multinode-865571 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:13:23 multinode-865571 kubelet[3079]: E0520 13:13:23.208682    3079 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:13:23 multinode-865571 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:13:23 multinode-865571 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:13:23 multinode-865571 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:13:23 multinode-865571 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 13:14:23 multinode-865571 kubelet[3079]: E0520 13:14:23.209661    3079 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 13:14:23 multinode-865571 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 13:14:23 multinode-865571 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 13:14:23 multinode-865571 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 13:14:23 multinode-865571 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 13:15:06.340688  894455 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18932-852915/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-865571 -n multinode-865571
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-865571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.35s)
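
Editor's note: the "bufio.Scanner: token too long" error in the stderr output above means the post-mortem step could not re-read .minikube/logs/lastStart.txt, because bufio.Scanner rejects any single line longer than its default 64 KiB limit (bufio.MaxScanTokenSize). Below is a minimal sketch, assuming a standalone reader of such a log file; it is not minikube's actual implementation, and the file path and buffer sizes are illustrative only.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; the failing run above read .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default per-line limit is bufio.MaxScanTokenSize (64 KiB).
	// Allow lines up to 16 MiB so very long start logs do not trip ErrTooLong.
	sc.Buffer(make([]byte, 0, 1<<20), 16<<20)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, this is where "token too long" surfaces.
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}

Under the same assumption, bufio.Reader.ReadString('\n') is an alternative that has no fixed per-line cap.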

                                                
                                    
x
+
TestPreload (169.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-446349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-446349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.479303969s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-446349 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-446349 image pull gcr.io/k8s-minikube/busybox: (1.064904191s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-446349
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-446349: (7.286550074s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-446349 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0520 13:21:10.516080  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-446349 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.830276676s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-446349 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-05-20 13:21:37.28884032 +0000 UTC m=+5369.443402786
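
Editor's note: the failing assertion above amounts to checking whether gcr.io/k8s-minikube/busybox appears in the profile's "image list" output after the preload-enabled restart. The sketch below shows that kind of check, reusing the binary path, profile name, and image name from this run; it is a hypothetical helper, not the code in preload_test.go.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// imageListed runs "minikube -p <profile> image list" and reports whether
// the given image reference appears anywhere in its output.
func imageListed(minikubeBin, profile, image string) (bool, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image list: %v\n%s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imageListed("out/minikube-linux-amd64", "test-preload-446349", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("busybox present:", ok)
}

In the run above such a check would report false: the listed images are only the preloaded registry.k8s.io / k8s.gcr.io components plus kindnetd and storage-provisioner, without the busybox image pulled before the restart.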
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-446349 -n test-preload-446349
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-446349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-446349 logs -n 25: (1.004074526s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571 sudo cat                                       | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m03_multinode-865571.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt                       | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m02:/home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n                                                                 | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | multinode-865571-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-865571 ssh -n multinode-865571-m02 sudo cat                                   | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-865571 node stop m03                                                          | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	| node    | multinode-865571 node start                                                             | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC | 20 May 24 13:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	| stop    | -p multinode-865571                                                                     | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:07 UTC |                     |
	| start   | -p multinode-865571                                                                     | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:09 UTC | 20 May 24 13:12 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:12 UTC |                     |
	| node    | multinode-865571 node delete                                                            | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:12 UTC | 20 May 24 13:12 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-865571 stop                                                                   | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:12 UTC |                     |
	| start   | -p multinode-865571                                                                     | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:15 UTC | 20 May 24 13:18 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-865571                                                                | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:18 UTC |                     |
	| start   | -p multinode-865571-m02                                                                 | multinode-865571-m02 | jenkins | v1.33.1 | 20 May 24 13:18 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-865571-m03                                                                 | multinode-865571-m03 | jenkins | v1.33.1 | 20 May 24 13:18 UTC | 20 May 24 13:18 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-865571                                                                 | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:18 UTC |                     |
	| delete  | -p multinode-865571-m03                                                                 | multinode-865571-m03 | jenkins | v1.33.1 | 20 May 24 13:18 UTC | 20 May 24 13:18 UTC |
	| delete  | -p multinode-865571                                                                     | multinode-865571     | jenkins | v1.33.1 | 20 May 24 13:18 UTC | 20 May 24 13:18 UTC |
	| start   | -p test-preload-446349                                                                  | test-preload-446349  | jenkins | v1.33.1 | 20 May 24 13:18 UTC | 20 May 24 13:20 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-446349 image pull                                                          | test-preload-446349  | jenkins | v1.33.1 | 20 May 24 13:20 UTC | 20 May 24 13:20 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-446349                                                                  | test-preload-446349  | jenkins | v1.33.1 | 20 May 24 13:20 UTC | 20 May 24 13:20 UTC |
	| start   | -p test-preload-446349                                                                  | test-preload-446349  | jenkins | v1.33.1 | 20 May 24 13:20 UTC | 20 May 24 13:21 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-446349 image list                                                          | test-preload-446349  | jenkins | v1.33.1 | 20 May 24 13:21 UTC | 20 May 24 13:21 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:20:35
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:20:35.280084  897298 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:20:35.280182  897298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:20:35.280191  897298 out.go:304] Setting ErrFile to fd 2...
	I0520 13:20:35.280195  897298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:20:35.280352  897298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:20:35.280852  897298 out.go:298] Setting JSON to false
	I0520 13:20:35.281769  897298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10983,"bootTime":1716200252,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:20:35.281828  897298 start.go:139] virtualization: kvm guest
	I0520 13:20:35.284325  897298 out.go:177] * [test-preload-446349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:20:35.285604  897298 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:20:35.287031  897298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:20:35.285645  897298 notify.go:220] Checking for updates...
	I0520 13:20:35.288670  897298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:20:35.289907  897298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:20:35.291093  897298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:20:35.292170  897298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:20:35.293590  897298 config.go:182] Loaded profile config "test-preload-446349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0520 13:20:35.293995  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:20:35.294046  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:20:35.308721  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0520 13:20:35.309094  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:20:35.309589  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:20:35.309611  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:20:35.309885  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:20:35.310085  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:35.311922  897298 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 13:20:35.313216  897298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:20:35.313478  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:20:35.313514  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:20:35.327494  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0520 13:20:35.327809  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:20:35.328238  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:20:35.328257  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:20:35.328556  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:20:35.328741  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:35.361547  897298 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:20:35.362719  897298 start.go:297] selected driver: kvm2
	I0520 13:20:35.362755  897298 start.go:901] validating driver "kvm2" against &{Name:test-preload-446349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-446349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:20:35.362905  897298 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:20:35.363629  897298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:20:35.363728  897298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:20:35.377466  897298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:20:35.377770  897298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:20:35.377827  897298 cni.go:84] Creating CNI manager for ""
	I0520 13:20:35.377847  897298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:20:35.377916  897298 start.go:340] cluster config:
	{Name:test-preload-446349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-446349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:20:35.378018  897298 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:20:35.379720  897298 out.go:177] * Starting "test-preload-446349" primary control-plane node in "test-preload-446349" cluster
	I0520 13:20:35.381205  897298 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0520 13:20:35.403130  897298 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0520 13:20:35.403162  897298 cache.go:56] Caching tarball of preloaded images
	I0520 13:20:35.403312  897298 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0520 13:20:35.405125  897298 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0520 13:20:35.406179  897298 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0520 13:20:35.432925  897298 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0520 13:20:38.314993  897298 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0520 13:20:38.315089  897298 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0520 13:20:39.186189  897298 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0520 13:20:39.186332  897298 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/config.json ...
	I0520 13:20:39.186546  897298 start.go:360] acquireMachinesLock for test-preload-446349: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:20:39.186617  897298 start.go:364] duration metric: took 48.304µs to acquireMachinesLock for "test-preload-446349"
	I0520 13:20:39.186639  897298 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:20:39.186649  897298 fix.go:54] fixHost starting: 
	I0520 13:20:39.187004  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:20:39.187047  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:20:39.201716  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43609
	I0520 13:20:39.202251  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:20:39.202817  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:20:39.202860  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:20:39.203161  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:20:39.203341  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:39.203463  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetState
	I0520 13:20:39.204883  897298 fix.go:112] recreateIfNeeded on test-preload-446349: state=Stopped err=<nil>
	I0520 13:20:39.204932  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	W0520 13:20:39.205113  897298 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:20:39.207210  897298 out.go:177] * Restarting existing kvm2 VM for "test-preload-446349" ...
	I0520 13:20:39.208485  897298 main.go:141] libmachine: (test-preload-446349) Calling .Start
	I0520 13:20:39.208646  897298 main.go:141] libmachine: (test-preload-446349) Ensuring networks are active...
	I0520 13:20:39.209328  897298 main.go:141] libmachine: (test-preload-446349) Ensuring network default is active
	I0520 13:20:39.209708  897298 main.go:141] libmachine: (test-preload-446349) Ensuring network mk-test-preload-446349 is active
	I0520 13:20:39.210091  897298 main.go:141] libmachine: (test-preload-446349) Getting domain xml...
	I0520 13:20:39.210776  897298 main.go:141] libmachine: (test-preload-446349) Creating domain...
	I0520 13:20:40.390327  897298 main.go:141] libmachine: (test-preload-446349) Waiting to get IP...
	I0520 13:20:40.391496  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:40.391917  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:40.392016  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:40.391902  897349 retry.go:31] will retry after 220.897432ms: waiting for machine to come up
	I0520 13:20:40.614456  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:40.614989  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:40.615019  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:40.614948  897349 retry.go:31] will retry after 341.40785ms: waiting for machine to come up
	I0520 13:20:40.957455  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:40.957874  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:40.957907  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:40.957812  897349 retry.go:31] will retry after 293.664465ms: waiting for machine to come up
	I0520 13:20:41.254441  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:41.255248  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:41.255301  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:41.255189  897349 retry.go:31] will retry after 578.339793ms: waiting for machine to come up
	I0520 13:20:41.834818  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:41.835205  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:41.835231  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:41.835167  897349 retry.go:31] will retry after 549.509046ms: waiting for machine to come up
	I0520 13:20:42.386555  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:42.386949  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:42.386979  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:42.386908  897349 retry.go:31] will retry after 864.523036ms: waiting for machine to come up
	I0520 13:20:43.252917  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:43.253324  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:43.253349  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:43.253268  897349 retry.go:31] will retry after 758.188711ms: waiting for machine to come up
	I0520 13:20:44.013049  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:44.013587  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:44.013617  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:44.013530  897349 retry.go:31] will retry after 1.454733778s: waiting for machine to come up
	I0520 13:20:45.470759  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:45.471217  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:45.471245  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:45.471169  897349 retry.go:31] will retry after 1.76694685s: waiting for machine to come up
	I0520 13:20:47.240277  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:47.240644  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:47.240672  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:47.240604  897349 retry.go:31] will retry after 1.771839351s: waiting for machine to come up
	I0520 13:20:49.014939  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:49.015327  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:49.015360  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:49.015258  897349 retry.go:31] will retry after 2.587038735s: waiting for machine to come up
	I0520 13:20:51.605696  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:51.606088  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:51.606112  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:51.606007  897349 retry.go:31] will retry after 2.53653763s: waiting for machine to come up
	I0520 13:20:54.145624  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:54.146171  897298 main.go:141] libmachine: (test-preload-446349) DBG | unable to find current IP address of domain test-preload-446349 in network mk-test-preload-446349
	I0520 13:20:54.146226  897298 main.go:141] libmachine: (test-preload-446349) DBG | I0520 13:20:54.146140  897349 retry.go:31] will retry after 3.716914926s: waiting for machine to come up
	I0520 13:20:57.867344  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.867748  897298 main.go:141] libmachine: (test-preload-446349) Found IP for machine: 192.168.39.147
	I0520 13:20:57.867775  897298 main.go:141] libmachine: (test-preload-446349) Reserving static IP address...
	I0520 13:20:57.867792  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has current primary IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.868257  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "test-preload-446349", mac: "52:54:00:b7:ef:db", ip: "192.168.39.147"} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:57.868280  897298 main.go:141] libmachine: (test-preload-446349) DBG | skip adding static IP to network mk-test-preload-446349 - found existing host DHCP lease matching {name: "test-preload-446349", mac: "52:54:00:b7:ef:db", ip: "192.168.39.147"}
	I0520 13:20:57.868290  897298 main.go:141] libmachine: (test-preload-446349) Reserved static IP address: 192.168.39.147
	I0520 13:20:57.868304  897298 main.go:141] libmachine: (test-preload-446349) Waiting for SSH to be available...
	I0520 13:20:57.868319  897298 main.go:141] libmachine: (test-preload-446349) DBG | Getting to WaitForSSH function...
	I0520 13:20:57.870370  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.870678  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:57.870699  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.870835  897298 main.go:141] libmachine: (test-preload-446349) DBG | Using SSH client type: external
	I0520 13:20:57.870915  897298 main.go:141] libmachine: (test-preload-446349) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa (-rw-------)
	I0520 13:20:57.870959  897298 main.go:141] libmachine: (test-preload-446349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:20:57.870973  897298 main.go:141] libmachine: (test-preload-446349) DBG | About to run SSH command:
	I0520 13:20:57.870994  897298 main.go:141] libmachine: (test-preload-446349) DBG | exit 0
	I0520 13:20:57.990816  897298 main.go:141] libmachine: (test-preload-446349) DBG | SSH cmd err, output: <nil>: 
	I0520 13:20:57.991282  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetConfigRaw
	I0520 13:20:57.991999  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetIP
	I0520 13:20:57.994608  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.994975  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:57.995004  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.995265  897298 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/config.json ...
	I0520 13:20:57.995453  897298 machine.go:94] provisionDockerMachine start ...
	I0520 13:20:57.995471  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:57.995695  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:57.997882  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.998251  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:57.998279  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:57.998381  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:57.998556  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:57.998704  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:57.998823  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:57.998988  897298 main.go:141] libmachine: Using SSH client type: native
	I0520 13:20:57.999228  897298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0520 13:20:57.999241  897298 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:20:58.095171  897298 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 13:20:58.095208  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetMachineName
	I0520 13:20:58.095516  897298 buildroot.go:166] provisioning hostname "test-preload-446349"
	I0520 13:20:58.095546  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetMachineName
	I0520 13:20:58.095749  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.098367  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.098665  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.098701  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.098794  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:58.099009  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.099211  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.099351  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:58.099528  897298 main.go:141] libmachine: Using SSH client type: native
	I0520 13:20:58.099701  897298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0520 13:20:58.099713  897298 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-446349 && echo "test-preload-446349" | sudo tee /etc/hostname
	I0520 13:20:58.208475  897298 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-446349
	
	I0520 13:20:58.208504  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.211364  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.211700  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.211737  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.211946  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:58.212145  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.212329  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.212428  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:58.212588  897298 main.go:141] libmachine: Using SSH client type: native
	I0520 13:20:58.212867  897298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0520 13:20:58.212895  897298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-446349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-446349/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-446349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:20:58.315759  897298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:20:58.315803  897298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 13:20:58.315840  897298 buildroot.go:174] setting up certificates
	I0520 13:20:58.315853  897298 provision.go:84] configureAuth start
	I0520 13:20:58.315866  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetMachineName
	I0520 13:20:58.316184  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetIP
	I0520 13:20:58.318785  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.319111  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.319149  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.319256  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.321242  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.321482  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.321509  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.321668  897298 provision.go:143] copyHostCerts
	I0520 13:20:58.321735  897298 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 13:20:58.321765  897298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:20:58.321828  897298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 13:20:58.321933  897298 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 13:20:58.321944  897298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:20:58.321970  897298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 13:20:58.322027  897298 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 13:20:58.322034  897298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:20:58.322054  897298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 13:20:58.322101  897298 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.test-preload-446349 san=[127.0.0.1 192.168.39.147 localhost minikube test-preload-446349]
	I0520 13:20:58.428687  897298 provision.go:177] copyRemoteCerts
	I0520 13:20:58.428747  897298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:20:58.428791  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.431536  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.431823  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.431851  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.432091  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:58.432322  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.432479  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:58.432613  897298 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa Username:docker}
	I0520 13:20:58.513208  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 13:20:58.537606  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 13:20:58.561269  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 13:20:58.584375  897298 provision.go:87] duration metric: took 268.508477ms to configureAuth
	I0520 13:20:58.584403  897298 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:20:58.584589  897298 config.go:182] Loaded profile config "test-preload-446349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0520 13:20:58.584698  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.587476  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.587794  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.587827  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.588005  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:58.588186  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.588373  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.588547  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:58.588723  897298 main.go:141] libmachine: Using SSH client type: native
	I0520 13:20:58.588943  897298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0520 13:20:58.588961  897298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:20:58.844068  897298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:20:58.844098  897298 machine.go:97] duration metric: took 848.632047ms to provisionDockerMachine
	I0520 13:20:58.844114  897298 start.go:293] postStartSetup for "test-preload-446349" (driver="kvm2")
	I0520 13:20:58.844146  897298 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:20:58.844180  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:58.844533  897298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:20:58.844590  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.847209  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.847583  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.847611  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.847756  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:58.847967  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.848160  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:58.848314  897298 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa Username:docker}
	I0520 13:20:58.924979  897298 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:20:58.929274  897298 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:20:58.929294  897298 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 13:20:58.929362  897298 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 13:20:58.929429  897298 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 13:20:58.929510  897298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:20:58.938972  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:20:58.962504  897298 start.go:296] duration metric: took 118.375549ms for postStartSetup
	I0520 13:20:58.962551  897298 fix.go:56] duration metric: took 19.775897559s for fixHost
	I0520 13:20:58.962580  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:58.965265  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.965579  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:58.965604  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:58.965816  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:58.966034  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.966202  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:58.966428  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:58.966600  897298 main.go:141] libmachine: Using SSH client type: native
	I0520 13:20:58.966888  897298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0520 13:20:58.966910  897298 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 13:20:59.063655  897298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211259.031730772
	
	I0520 13:20:59.063682  897298 fix.go:216] guest clock: 1716211259.031730772
	I0520 13:20:59.063689  897298 fix.go:229] Guest: 2024-05-20 13:20:59.031730772 +0000 UTC Remote: 2024-05-20 13:20:58.962557486 +0000 UTC m=+23.716860823 (delta=69.173286ms)
	I0520 13:20:59.063709  897298 fix.go:200] guest clock delta is within tolerance: 69.173286ms
	I0520 13:20:59.063715  897298 start.go:83] releasing machines lock for "test-preload-446349", held for 19.877084084s
	I0520 13:20:59.063732  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:59.064013  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetIP
	I0520 13:20:59.066969  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:59.067361  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:59.067393  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:59.067582  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:59.068293  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:59.068494  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:20:59.068587  897298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:20:59.068618  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:59.068686  897298 ssh_runner.go:195] Run: cat /version.json
	I0520 13:20:59.068701  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:20:59.071588  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:59.071616  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:59.071958  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:59.071994  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:59.072024  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:20:59.072041  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:20:59.072079  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:59.072295  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:20:59.072336  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:59.072524  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:59.072527  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:20:59.072729  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:20:59.072732  897298 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa Username:docker}
	I0520 13:20:59.072879  897298 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa Username:docker}
	W0520 13:20:59.143470  897298 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:20:59.143593  897298 ssh_runner.go:195] Run: systemctl --version
	I0520 13:20:59.171389  897298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:20:59.312657  897298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:20:59.319747  897298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:20:59.319816  897298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:20:59.338955  897298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:20:59.338977  897298 start.go:494] detecting cgroup driver to use...
	I0520 13:20:59.339048  897298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:20:59.359323  897298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:20:59.375280  897298 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:20:59.375322  897298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:20:59.391140  897298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:20:59.407327  897298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:20:59.531853  897298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:20:59.697900  897298 docker.go:233] disabling docker service ...
	I0520 13:20:59.697997  897298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:20:59.712261  897298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:20:59.734495  897298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:20:59.857026  897298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:20:59.983070  897298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:20:59.996845  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:21:00.015062  897298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0520 13:21:00.015131  897298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:21:00.024981  897298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:21:00.025037  897298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:21:00.034979  897298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:21:00.044717  897298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:21:00.054596  897298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:21:00.064631  897298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:21:00.074292  897298 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:21:00.090731  897298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
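	For readability, a minimal sketch of what the sed edits above leave behind in /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands in the log rather than captured from the VM; the section headers and key ordering are assumptions.
		# Sketch only: expected state of 02-crio.conf after the edits above.
		cat /etc/crio/crio.conf.d/02-crio.conf
		# [crio.image]
		# pause_image = "registry.k8s.io/pause:3.7"
		# [crio.runtime]
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"
		# default_sysctls = [
		#   "net.ipv4.ip_unprivileged_port_start=0",
		# ]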
	I0520 13:21:00.100572  897298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:21:00.109589  897298 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:21:00.109637  897298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:21:00.121439  897298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
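	The sysctl failure at 13:21:00.109589 is expected before the br_netfilter module is loaded, since /proc/sys/net/bridge/ only exists once the module is in place. A hand-run illustration of the same prerequisite check (not part of minikube's code):
		# Load the module, then the previously missing sysctl resolves.
		sudo modprobe br_netfilter
		sudo sysctl net.bridge.bridge-nf-call-iptables
		# Enable IPv4 forwarding, as the log does above.
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"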
	I0520 13:21:00.130169  897298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:21:00.245885  897298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:21:00.381511  897298 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:21:00.381600  897298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:21:00.386710  897298 start.go:562] Will wait 60s for crictl version
	I0520 13:21:00.386776  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:00.390425  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:21:00.430004  897298 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:21:00.430083  897298 ssh_runner.go:195] Run: crio --version
	I0520 13:21:00.456705  897298 ssh_runner.go:195] Run: crio --version
	I0520 13:21:00.484418  897298 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0520 13:21:00.485682  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetIP
	I0520 13:21:00.488401  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:21:00.488678  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:21:00.488709  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:21:00.488902  897298 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 13:21:00.492711  897298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:21:00.504950  897298 kubeadm.go:877] updating cluster {Name:test-preload-446349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-446349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:21:00.505069  897298 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0520 13:21:00.505133  897298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:21:00.540762  897298 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0520 13:21:00.540822  897298 ssh_runner.go:195] Run: which lz4
	I0520 13:21:00.544648  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 13:21:00.548569  897298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 13:21:00.548593  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0520 13:21:02.138130  897298 crio.go:462] duration metric: took 1.593505006s to copy over tarball
	I0520 13:21:02.138218  897298 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 13:21:04.434414  897298 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.296163198s)
	I0520 13:21:04.434449  897298 crio.go:469] duration metric: took 2.29628432s to extract the tarball
	I0520 13:21:04.434457  897298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 13:21:04.475425  897298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:21:04.523579  897298 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0520 13:21:04.523606  897298 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 13:21:04.523676  897298 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:21:04.523701  897298 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 13:21:04.523717  897298 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 13:21:04.523729  897298 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 13:21:04.523759  897298 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 13:21:04.523788  897298 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 13:21:04.523785  897298 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 13:21:04.523676  897298 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 13:21:04.525288  897298 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 13:21:04.525330  897298 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 13:21:04.525340  897298 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 13:21:04.525288  897298 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:21:04.525351  897298 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 13:21:04.525290  897298 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 13:21:04.525288  897298 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 13:21:04.525290  897298 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 13:21:04.684929  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 13:21:04.691516  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0520 13:21:04.693433  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 13:21:04.698036  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0520 13:21:04.712136  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 13:21:04.736027  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0520 13:21:04.741997  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 13:21:04.764942  897298 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0520 13:21:04.764981  897298 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0520 13:21:04.765032  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.771883  897298 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0520 13:21:04.771918  897298 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 13:21:04.771949  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.814837  897298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:21:04.818880  897298 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0520 13:21:04.818917  897298 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 13:21:04.818957  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.863799  897298 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0520 13:21:04.863845  897298 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 13:21:04.863893  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.875773  897298 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0520 13:21:04.875819  897298 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 13:21:04.875824  897298 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0520 13:21:04.875849  897298 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0520 13:21:04.875857  897298 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 13:21:04.875869  897298 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 13:21:04.875878  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.875897  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.875904  897298 ssh_runner.go:195] Run: which crictl
	I0520 13:21:04.875961  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0520 13:21:04.875998  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0520 13:21:05.024944  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0520 13:21:05.025030  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0520 13:21:05.025084  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0520 13:21:05.025120  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 13:21:05.025034  897298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 13:21:05.025172  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0520 13:21:05.025231  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0520 13:21:05.025310  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0520 13:21:05.025310  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0520 13:21:05.105401  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0520 13:21:05.105433  897298 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 13:21:05.105463  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0520 13:21:05.105487  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0520 13:21:05.105534  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0520 13:21:05.105537  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0520 13:21:05.140380  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0520 13:21:05.140426  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0520 13:21:05.140440  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0520 13:21:05.140494  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 13:21:05.140518  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0520 13:21:05.140527  897298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 13:21:05.140590  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0520 13:21:05.140530  897298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0520 13:21:08.902317  897298 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (3.796802125s)
	I0520 13:21:08.902365  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0520 13:21:08.902381  897298 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.796824406s)
	I0520 13:21:08.902404  897298 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0520 13:21:08.902415  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0520 13:21:08.902454  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0520 13:21:08.902460  897298 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.761824517s)
	I0520 13:21:08.902484  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0520 13:21:08.902504  897298 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: (3.76189468s)
	I0520 13:21:08.902517  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0520 13:21:08.902579  897298 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (3.762062615s)
	I0520 13:21:08.902609  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0520 13:21:08.902630  897298 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.762096881s)
	I0520 13:21:08.902644  897298 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0520 13:21:09.644136  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0520 13:21:09.644182  897298 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0520 13:21:09.644235  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0520 13:21:10.486657  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0520 13:21:10.486721  897298 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0520 13:21:10.486782  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0520 13:21:10.933785  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0520 13:21:10.933843  897298 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 13:21:10.933899  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0520 13:21:11.271933  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 13:21:11.271997  897298 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 13:21:11.272054  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0520 13:21:13.417559  897298 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.14547898s)
	I0520 13:21:13.417597  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 13:21:13.417621  897298 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0520 13:21:13.417666  897298 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0520 13:21:14.062566  897298 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0520 13:21:14.062626  897298 cache_images.go:123] Successfully loaded all cached images
	I0520 13:21:14.062634  897298 cache_images.go:92] duration metric: took 9.539015942s to LoadCachedImages
	I0520 13:21:14.062650  897298 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.24.4 crio true true} ...
	I0520 13:21:14.062803  897298 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-446349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-446349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
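	A note on the kubelet unit fragment above: in a systemd drop-in, an empty ExecStart= resets the command inherited from the base kubelet.service, so the following ExecStart= line becomes the only one rather than a conflicting second command. A stripped-down illustration of that idiom in the drop-in the log writes later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (generic systemd behaviour, flags abbreviated, not copied from the VM):
		cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		# [Unit]
		# Wants=crio.service
		# [Service]
		# ExecStart=
		# ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml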
	I0520 13:21:14.062902  897298 ssh_runner.go:195] Run: crio config
	I0520 13:21:14.108439  897298 cni.go:84] Creating CNI manager for ""
	I0520 13:21:14.108462  897298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:21:14.108482  897298 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:21:14.108501  897298 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-446349 NodeName:test-preload-446349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:21:14.108658  897298 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-446349"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:21:14.108722  897298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0520 13:21:14.118671  897298 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:21:14.118742  897298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:21:14.127697  897298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0520 13:21:14.143771  897298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:21:14.159669  897298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0520 13:21:14.176297  897298 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0520 13:21:14.180077  897298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:21:14.191739  897298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:21:14.306256  897298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:21:14.323566  897298 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349 for IP: 192.168.39.147
	I0520 13:21:14.323598  897298 certs.go:194] generating shared ca certs ...
	I0520 13:21:14.323620  897298 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:21:14.323822  897298 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:21:14.323865  897298 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:21:14.323871  897298 certs.go:256] generating profile certs ...
	I0520 13:21:14.323973  897298 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/client.key
	I0520 13:21:14.324065  897298 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/apiserver.key.477f206f
	I0520 13:21:14.324114  897298 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/proxy-client.key
	I0520 13:21:14.324256  897298 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:21:14.324299  897298 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:21:14.324313  897298 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:21:14.324344  897298 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:21:14.324377  897298 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:21:14.324418  897298 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:21:14.324500  897298 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:21:14.325492  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:21:14.360948  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:21:14.411510  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:21:14.446521  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:21:14.472864  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 13:21:14.503590  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:21:14.528985  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:21:14.566449  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:21:14.590082  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:21:14.613175  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:21:14.635935  897298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:21:14.658158  897298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:21:14.674387  897298 ssh_runner.go:195] Run: openssl version
	I0520 13:21:14.680028  897298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:21:14.690585  897298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:21:14.694926  897298 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:21:14.694966  897298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:21:14.700676  897298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:21:14.711122  897298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:21:14.721777  897298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:21:14.725930  897298 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:21:14.725973  897298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:21:14.731490  897298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:21:14.741967  897298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:21:14.752417  897298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:21:14.756707  897298 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:21:14.756750  897298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:21:14.762432  897298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 13:21:14.772889  897298 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:21:14.777155  897298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:21:14.782794  897298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:21:14.788404  897298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:21:14.794062  897298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:21:14.799587  897298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:21:14.805098  897298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 13:21:14.810642  897298 kubeadm.go:391] StartCluster: {Name:test-preload-446349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-446349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:21:14.810729  897298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:21:14.810793  897298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:21:14.850991  897298 cri.go:89] found id: ""
	I0520 13:21:14.851097  897298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 13:21:14.862292  897298 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 13:21:14.862316  897298 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 13:21:14.862322  897298 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 13:21:14.862384  897298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 13:21:14.872820  897298 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:21:14.873277  897298 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-446349" does not appear in /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:21:14.873407  897298 kubeconfig.go:62] /home/jenkins/minikube-integration/18932-852915/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-446349" cluster setting kubeconfig missing "test-preload-446349" context setting]
	I0520 13:21:14.873702  897298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:21:14.874288  897298 kapi.go:59] client config for test-preload-446349: &rest.Config{Host:"https://192.168.39.147:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 13:21:14.875022  897298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 13:21:14.884645  897298 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0520 13:21:14.884673  897298 kubeadm.go:1154] stopping kube-system containers ...
	I0520 13:21:14.884687  897298 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 13:21:14.884729  897298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:21:14.919892  897298 cri.go:89] found id: ""
	I0520 13:21:14.919954  897298 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 13:21:14.936642  897298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 13:21:14.946484  897298 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 13:21:14.946507  897298 kubeadm.go:156] found existing configuration files:
	
	I0520 13:21:14.946544  897298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 13:21:14.955663  897298 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 13:21:14.955718  897298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 13:21:14.965116  897298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 13:21:14.974026  897298 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 13:21:14.974072  897298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 13:21:14.983410  897298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 13:21:14.992213  897298 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 13:21:14.992261  897298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 13:21:15.001575  897298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 13:21:15.010454  897298 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 13:21:15.010506  897298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 13:21:15.020038  897298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 13:21:15.031354  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 13:21:15.124135  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 13:21:15.775782  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 13:21:16.050459  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 13:21:16.113971  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 13:21:16.192442  897298 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:21:16.192567  897298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:21:16.693608  897298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:21:17.192835  897298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:21:17.215922  897298 api_server.go:72] duration metric: took 1.023483609s to wait for apiserver process to appear ...
	I0520 13:21:17.215953  897298 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:21:17.215976  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:17.216472  897298 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0520 13:21:17.716246  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:21.315145  897298 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 13:21:21.315176  897298 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 13:21:21.315193  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:21.326097  897298 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 13:21:21.326121  897298 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 13:21:21.716673  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:21.721583  897298 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 13:21:21.721618  897298 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 13:21:22.216153  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:22.231464  897298 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 13:21:22.231500  897298 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 13:21:22.716058  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:22.721172  897298 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0520 13:21:22.727323  897298 api_server.go:141] control plane version: v1.24.4
	I0520 13:21:22.727347  897298 api_server.go:131] duration metric: took 5.511387885s to wait for apiserver health ...
	I0520 13:21:22.727357  897298 cni.go:84] Creating CNI manager for ""
	I0520 13:21:22.727363  897298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:21:22.729242  897298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 13:21:22.730387  897298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 13:21:22.741563  897298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 13:21:22.758811  897298 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:21:22.767290  897298 system_pods.go:59] 7 kube-system pods found
	I0520 13:21:22.767315  897298 system_pods.go:61] "coredns-6d4b75cb6d-6b27n" [2afd2688-a776-4231-a1d0-4db5872302d2] Running
	I0520 13:21:22.767320  897298 system_pods.go:61] "etcd-test-preload-446349" [52513625-523e-4b3f-b07b-5d5c59acbca0] Running
	I0520 13:21:22.767324  897298 system_pods.go:61] "kube-apiserver-test-preload-446349" [338a8e86-e1da-4b07-af69-1487a2475430] Running
	I0520 13:21:22.767332  897298 system_pods.go:61] "kube-controller-manager-test-preload-446349" [cd297826-6788-418a-b039-a583a1bfa330] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 13:21:22.767341  897298 system_pods.go:61] "kube-proxy-8j7xb" [a72074a2-3fb1-408c-ab54-9735f008b857] Running
	I0520 13:21:22.767348  897298 system_pods.go:61] "kube-scheduler-test-preload-446349" [3b74dd51-38f0-40c4-a03e-4171f99b2fd2] Running
	I0520 13:21:22.767355  897298 system_pods.go:61] "storage-provisioner" [dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 13:21:22.767372  897298 system_pods.go:74] duration metric: took 8.538284ms to wait for pod list to return data ...
	I0520 13:21:22.767383  897298 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:21:22.770326  897298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:21:22.770358  897298 node_conditions.go:123] node cpu capacity is 2
	I0520 13:21:22.770373  897298 node_conditions.go:105] duration metric: took 2.985706ms to run NodePressure ...
	I0520 13:21:22.770397  897298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 13:21:23.040507  897298 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 13:21:23.045048  897298 kubeadm.go:733] kubelet initialised
	I0520 13:21:23.045066  897298 kubeadm.go:734] duration metric: took 4.526282ms waiting for restarted kubelet to initialise ...
	I0520 13:21:23.045073  897298 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:21:23.049860  897298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:23.055100  897298 pod_ready.go:97] node "test-preload-446349" hosting pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.055128  897298 pod_ready.go:81] duration metric: took 5.239267ms for pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace to be "Ready" ...
	E0520 13:21:23.055138  897298 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-446349" hosting pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.055147  897298 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:23.062123  897298 pod_ready.go:97] node "test-preload-446349" hosting pod "etcd-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.062149  897298 pod_ready.go:81] duration metric: took 6.992256ms for pod "etcd-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	E0520 13:21:23.062159  897298 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-446349" hosting pod "etcd-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.062167  897298 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:23.067416  897298 pod_ready.go:97] node "test-preload-446349" hosting pod "kube-apiserver-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.067439  897298 pod_ready.go:81] duration metric: took 5.262762ms for pod "kube-apiserver-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	E0520 13:21:23.067446  897298 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-446349" hosting pod "kube-apiserver-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.067452  897298 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:23.163505  897298 pod_ready.go:97] node "test-preload-446349" hosting pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.163545  897298 pod_ready.go:81] duration metric: took 96.08203ms for pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	E0520 13:21:23.163570  897298 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-446349" hosting pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.163580  897298 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7xb" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:23.562367  897298 pod_ready.go:97] node "test-preload-446349" hosting pod "kube-proxy-8j7xb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.562401  897298 pod_ready.go:81] duration metric: took 398.809023ms for pod "kube-proxy-8j7xb" in "kube-system" namespace to be "Ready" ...
	E0520 13:21:23.562410  897298 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-446349" hosting pod "kube-proxy-8j7xb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.562417  897298 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:23.963132  897298 pod_ready.go:97] node "test-preload-446349" hosting pod "kube-scheduler-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.963168  897298 pod_ready.go:81] duration metric: took 400.744046ms for pod "kube-scheduler-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	E0520 13:21:23.963178  897298 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-446349" hosting pod "kube-scheduler-test-preload-446349" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:23.963185  897298 pod_ready.go:38] duration metric: took 918.102536ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:21:23.963204  897298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 13:21:23.975641  897298 ops.go:34] apiserver oom_adj: -16
	I0520 13:21:23.975666  897298 kubeadm.go:591] duration metric: took 9.113336328s to restartPrimaryControlPlane
	I0520 13:21:23.975677  897298 kubeadm.go:393] duration metric: took 9.165040712s to StartCluster
	I0520 13:21:23.975707  897298 settings.go:142] acquiring lock: {Name:mk4281d9011919f2beed93cad1a6e2e67e70984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:21:23.975801  897298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:21:23.976552  897298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:21:23.976788  897298 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:21:23.978609  897298 out.go:177] * Verifying Kubernetes components...
	I0520 13:21:23.976915  897298 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 13:21:23.977040  897298 config.go:182] Loaded profile config "test-preload-446349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0520 13:21:23.979879  897298 addons.go:69] Setting storage-provisioner=true in profile "test-preload-446349"
	I0520 13:21:23.979920  897298 addons.go:234] Setting addon storage-provisioner=true in "test-preload-446349"
	W0520 13:21:23.979929  897298 addons.go:243] addon storage-provisioner should already be in state true
	I0520 13:21:23.979964  897298 host.go:66] Checking if "test-preload-446349" exists ...
	I0520 13:21:23.979882  897298 addons.go:69] Setting default-storageclass=true in profile "test-preload-446349"
	I0520 13:21:23.980036  897298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-446349"
	I0520 13:21:23.979883  897298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:21:23.980340  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:21:23.980380  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:21:23.980456  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:21:23.980501  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:21:23.995947  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0520 13:21:23.995973  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I0520 13:21:23.996393  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:21:23.996508  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:21:23.996877  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:21:23.996896  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:21:23.997031  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:21:23.997053  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:21:23.997291  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:21:23.997382  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:21:23.997502  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetState
	I0520 13:21:23.997849  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:21:23.997891  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:21:23.999992  897298 kapi.go:59] client config for test-preload-446349: &rest.Config{Host:"https://192.168.39.147:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/client.crt", KeyFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/profiles/test-preload-446349/client.key", CAFile:"/home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 13:21:24.000257  897298 addons.go:234] Setting addon default-storageclass=true in "test-preload-446349"
	W0520 13:21:24.000274  897298 addons.go:243] addon default-storageclass should already be in state true
	I0520 13:21:24.000307  897298 host.go:66] Checking if "test-preload-446349" exists ...
	I0520 13:21:24.000756  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:21:24.000814  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:21:24.012656  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I0520 13:21:24.013114  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:21:24.013679  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:21:24.013708  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:21:24.014111  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:21:24.014311  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetState
	I0520 13:21:24.015249  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I0520 13:21:24.015761  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:21:24.016247  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:21:24.016262  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:21:24.016313  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:21:24.018416  897298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:21:24.016567  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:21:24.020011  897298 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 13:21:24.020031  897298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 13:21:24.020057  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:21:24.020278  897298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:21:24.020329  897298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:21:24.023001  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:21:24.023426  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:21:24.023457  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:21:24.023688  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:21:24.023878  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:21:24.024023  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:21:24.024165  897298 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa Username:docker}
	I0520 13:21:24.035954  897298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0520 13:21:24.036431  897298 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:21:24.036870  897298 main.go:141] libmachine: Using API Version  1
	I0520 13:21:24.036883  897298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:21:24.037200  897298 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:21:24.037398  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetState
	I0520 13:21:24.038926  897298 main.go:141] libmachine: (test-preload-446349) Calling .DriverName
	I0520 13:21:24.039138  897298 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 13:21:24.039158  897298 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 13:21:24.039177  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHHostname
	I0520 13:21:24.041913  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:21:24.042301  897298 main.go:141] libmachine: (test-preload-446349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:ef:db", ip: ""} in network mk-test-preload-446349: {Iface:virbr1 ExpiryTime:2024-05-20 14:20:49 +0000 UTC Type:0 Mac:52:54:00:b7:ef:db Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:test-preload-446349 Clientid:01:52:54:00:b7:ef:db}
	I0520 13:21:24.042348  897298 main.go:141] libmachine: (test-preload-446349) DBG | domain test-preload-446349 has defined IP address 192.168.39.147 and MAC address 52:54:00:b7:ef:db in network mk-test-preload-446349
	I0520 13:21:24.042471  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHPort
	I0520 13:21:24.042658  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHKeyPath
	I0520 13:21:24.042836  897298 main.go:141] libmachine: (test-preload-446349) Calling .GetSSHUsername
	I0520 13:21:24.043026  897298 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/test-preload-446349/id_rsa Username:docker}
	I0520 13:21:24.151123  897298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:21:24.166840  897298 node_ready.go:35] waiting up to 6m0s for node "test-preload-446349" to be "Ready" ...
	I0520 13:21:24.295405  897298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 13:21:24.306684  897298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 13:21:25.204109  897298 main.go:141] libmachine: Making call to close driver server
	I0520 13:21:25.204141  897298 main.go:141] libmachine: (test-preload-446349) Calling .Close
	I0520 13:21:25.204152  897298 main.go:141] libmachine: Making call to close driver server
	I0520 13:21:25.204171  897298 main.go:141] libmachine: (test-preload-446349) Calling .Close
	I0520 13:21:25.204475  897298 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:21:25.204523  897298 main.go:141] libmachine: (test-preload-446349) DBG | Closing plugin on server side
	I0520 13:21:25.204533  897298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:21:25.204543  897298 main.go:141] libmachine: Making call to close driver server
	I0520 13:21:25.204551  897298 main.go:141] libmachine: (test-preload-446349) Calling .Close
	I0520 13:21:25.204575  897298 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:21:25.204584  897298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:21:25.204590  897298 main.go:141] libmachine: Making call to close driver server
	I0520 13:21:25.204551  897298 main.go:141] libmachine: (test-preload-446349) DBG | Closing plugin on server side
	I0520 13:21:25.204604  897298 main.go:141] libmachine: (test-preload-446349) Calling .Close
	I0520 13:21:25.204807  897298 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:21:25.204839  897298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:21:25.204851  897298 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:21:25.204852  897298 main.go:141] libmachine: (test-preload-446349) DBG | Closing plugin on server side
	I0520 13:21:25.204863  897298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:21:25.209953  897298 main.go:141] libmachine: Making call to close driver server
	I0520 13:21:25.209969  897298 main.go:141] libmachine: (test-preload-446349) Calling .Close
	I0520 13:21:25.210160  897298 main.go:141] libmachine: (test-preload-446349) DBG | Closing plugin on server side
	I0520 13:21:25.210186  897298 main.go:141] libmachine: Successfully made call to close driver server
	I0520 13:21:25.210208  897298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 13:21:25.212137  897298 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 13:21:25.213523  897298 addons.go:505] duration metric: took 1.236606983s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 13:21:26.171196  897298 node_ready.go:53] node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:28.670321  897298 node_ready.go:53] node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:31.170681  897298 node_ready.go:53] node "test-preload-446349" has status "Ready":"False"
	I0520 13:21:31.673865  897298 node_ready.go:49] node "test-preload-446349" has status "Ready":"True"
	I0520 13:21:31.673891  897298 node_ready.go:38] duration metric: took 7.507013444s for node "test-preload-446349" to be "Ready" ...
	I0520 13:21:31.673901  897298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:21:31.678295  897298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:31.682357  897298 pod_ready.go:92] pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace has status "Ready":"True"
	I0520 13:21:31.682373  897298 pod_ready.go:81] duration metric: took 4.057873ms for pod "coredns-6d4b75cb6d-6b27n" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:31.682380  897298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:33.696521  897298 pod_ready.go:102] pod "etcd-test-preload-446349" in "kube-system" namespace has status "Ready":"False"
	I0520 13:21:36.188873  897298 pod_ready.go:92] pod "etcd-test-preload-446349" in "kube-system" namespace has status "Ready":"True"
	I0520 13:21:36.188907  897298 pod_ready.go:81] duration metric: took 4.506513124s for pod "etcd-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.188920  897298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.193382  897298 pod_ready.go:92] pod "kube-apiserver-test-preload-446349" in "kube-system" namespace has status "Ready":"True"
	I0520 13:21:36.193407  897298 pod_ready.go:81] duration metric: took 4.479865ms for pod "kube-apiserver-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.193418  897298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.197200  897298 pod_ready.go:92] pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace has status "Ready":"True"
	I0520 13:21:36.197221  897298 pod_ready.go:81] duration metric: took 3.796384ms for pod "kube-controller-manager-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.197232  897298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7xb" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.200989  897298 pod_ready.go:92] pod "kube-proxy-8j7xb" in "kube-system" namespace has status "Ready":"True"
	I0520 13:21:36.201009  897298 pod_ready.go:81] duration metric: took 3.770645ms for pod "kube-proxy-8j7xb" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.201017  897298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.204577  897298 pod_ready.go:92] pod "kube-scheduler-test-preload-446349" in "kube-system" namespace has status "Ready":"True"
	I0520 13:21:36.204593  897298 pod_ready.go:81] duration metric: took 3.570479ms for pod "kube-scheduler-test-preload-446349" in "kube-system" namespace to be "Ready" ...
	I0520 13:21:36.204602  897298 pod_ready.go:38] duration metric: took 4.530689067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:21:36.204620  897298 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:21:36.204678  897298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:21:36.220588  897298 api_server.go:72] duration metric: took 12.243764103s to wait for apiserver process to appear ...
	I0520 13:21:36.220613  897298 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:21:36.220631  897298 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0520 13:21:36.225074  897298 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0520 13:21:36.225768  897298 api_server.go:141] control plane version: v1.24.4
	I0520 13:21:36.225786  897298 api_server.go:131] duration metric: took 5.166451ms to wait for apiserver health ...
	I0520 13:21:36.225792  897298 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:21:36.389331  897298 system_pods.go:59] 7 kube-system pods found
	I0520 13:21:36.389360  897298 system_pods.go:61] "coredns-6d4b75cb6d-6b27n" [2afd2688-a776-4231-a1d0-4db5872302d2] Running
	I0520 13:21:36.389364  897298 system_pods.go:61] "etcd-test-preload-446349" [52513625-523e-4b3f-b07b-5d5c59acbca0] Running
	I0520 13:21:36.389367  897298 system_pods.go:61] "kube-apiserver-test-preload-446349" [338a8e86-e1da-4b07-af69-1487a2475430] Running
	I0520 13:21:36.389371  897298 system_pods.go:61] "kube-controller-manager-test-preload-446349" [cd297826-6788-418a-b039-a583a1bfa330] Running
	I0520 13:21:36.389373  897298 system_pods.go:61] "kube-proxy-8j7xb" [a72074a2-3fb1-408c-ab54-9735f008b857] Running
	I0520 13:21:36.389376  897298 system_pods.go:61] "kube-scheduler-test-preload-446349" [3b74dd51-38f0-40c4-a03e-4171f99b2fd2] Running
	I0520 13:21:36.389385  897298 system_pods.go:61] "storage-provisioner" [dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf] Running
	I0520 13:21:36.389392  897298 system_pods.go:74] duration metric: took 163.593517ms to wait for pod list to return data ...
	I0520 13:21:36.389398  897298 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:21:36.585397  897298 default_sa.go:45] found service account: "default"
	I0520 13:21:36.585425  897298 default_sa.go:55] duration metric: took 196.021864ms for default service account to be created ...
	I0520 13:21:36.585434  897298 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:21:36.789285  897298 system_pods.go:86] 7 kube-system pods found
	I0520 13:21:36.789313  897298 system_pods.go:89] "coredns-6d4b75cb6d-6b27n" [2afd2688-a776-4231-a1d0-4db5872302d2] Running
	I0520 13:21:36.789317  897298 system_pods.go:89] "etcd-test-preload-446349" [52513625-523e-4b3f-b07b-5d5c59acbca0] Running
	I0520 13:21:36.789321  897298 system_pods.go:89] "kube-apiserver-test-preload-446349" [338a8e86-e1da-4b07-af69-1487a2475430] Running
	I0520 13:21:36.789326  897298 system_pods.go:89] "kube-controller-manager-test-preload-446349" [cd297826-6788-418a-b039-a583a1bfa330] Running
	I0520 13:21:36.789330  897298 system_pods.go:89] "kube-proxy-8j7xb" [a72074a2-3fb1-408c-ab54-9735f008b857] Running
	I0520 13:21:36.789333  897298 system_pods.go:89] "kube-scheduler-test-preload-446349" [3b74dd51-38f0-40c4-a03e-4171f99b2fd2] Running
	I0520 13:21:36.789337  897298 system_pods.go:89] "storage-provisioner" [dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf] Running
	I0520 13:21:36.789344  897298 system_pods.go:126] duration metric: took 203.903851ms to wait for k8s-apps to be running ...
	I0520 13:21:36.789350  897298 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:21:36.789393  897298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:21:36.804387  897298 system_svc.go:56] duration metric: took 15.028223ms WaitForService to wait for kubelet
	I0520 13:21:36.804415  897298 kubeadm.go:576] duration metric: took 12.827594394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:21:36.804437  897298 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:21:36.985909  897298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:21:36.985935  897298 node_conditions.go:123] node cpu capacity is 2
	I0520 13:21:36.985955  897298 node_conditions.go:105] duration metric: took 181.50456ms to run NodePressure ...
	I0520 13:21:36.985970  897298 start.go:240] waiting for startup goroutines ...
	I0520 13:21:36.985980  897298 start.go:245] waiting for cluster config update ...
	I0520 13:21:36.985993  897298 start.go:254] writing updated cluster config ...
	I0520 13:21:36.986262  897298 ssh_runner.go:195] Run: rm -f paused
	I0520 13:21:37.033873  897298 start.go:600] kubectl: 1.30.1, cluster: 1.24.4 (minor skew: 6)
	I0520 13:21:37.035881  897298 out.go:177] 
	W0520 13:21:37.036989  897298 out.go:239] ! /usr/local/bin/kubectl is version 1.30.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0520 13:21:37.038058  897298 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0520 13:21:37.039173  897298 out.go:177] * Done! kubectl is now configured to use "test-preload-446349" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.886469625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211297886450131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9d8203c-b05c-4914-9e4e-baffea91c3be name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.887031783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59d0c81c-03bd-427a-a910-568a963c6652 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.887079153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59d0c81c-03bd-427a-a910-568a963c6652 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.887261445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c30d193150707b3f9b12396760512f8d249c5ebcbc4a51ca3d3ca386a51c4ff,PodSandboxId:60e9f7ac976723d2230470b63b82e5c081cd8c1fc1cbbcdb173d452c9c6b0449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716211290396640099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b27n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afd2688-a776-4231-a1d0-4db5872302d2,},Annotations:map[string]string{io.kubernetes.container.hash: b6722277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cda358ecc06bf44bd468d35aa491cac7bc4ec0137727150f2074ae4e386c3a,PodSandboxId:e113483258e594defdbe9ae7113c196ea16bc05d24900cb71a25f125861c748c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211283213182516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2e355798,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:277f4ff1caa9cee0b4bd0a526790f4b2b26a040ee10957fd50f0fcf0f93f74a3,PodSandboxId:90afebc98f9795ebc07e0d69ab64a10f18528c56d41d1e2f8c6c42b4eb6cb5ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716211282890128634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7xb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7
2074a2-3fb1-408c-ab54-9735f008b857,},Annotations:map[string]string{io.kubernetes.container.hash: f128ecc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6f40de0d3dc6d5644627092a6832cecd79fe8098a45a78255492cec59c67d34,PodSandboxId:bde388f9144e365f9429cc8fa3573e2d526850c96914a02ff17898173c698ca6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716211276969721935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6978e8ee
13e1e618c11ddbcb3c07350,},Annotations:map[string]string{io.kubernetes.container.hash: 443a51b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30be4b5ee005dfaa2ee1705c76c99f2eb5ee17ba7d9286337ed578e21da9db3a,PodSandboxId:e053f3efc48fcb23eac5d2a51d15304b4b9ed2b1c4ab58db109b950c9416c530,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716211276922901312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068c1d9f19f5c765c1f6
46568e62fc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e726c11c1cd0d82fad82e29b89867c294745893fefeb8000b3800061ab3b9194,PodSandboxId:87bfea6809d69ab8c355bd3b5ca351d2569af8a2ddd1ada9a3435fdf33533564,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716211276910020709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c07c
ba69f537caa0fc9969901db586e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a64f593040cb3126801ea26d619120813b67c70d571f63c9a9f526a223a76c9,PodSandboxId:2198e3bb0405cf44b2ba612438405491b6ebb6a42c3fbfcde285459d879668f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716211276890283225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346aca613f263073e636cbeca315ac0a,},Annotations
:map[string]string{io.kubernetes.container.hash: 97c9ea8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59d0c81c-03bd-427a-a910-568a963c6652 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.923758377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce62060c-3248-4e65-8e1d-541e318a1d48 name=/runtime.v1.RuntimeService/Version
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.923823071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce62060c-3248-4e65-8e1d-541e318a1d48 name=/runtime.v1.RuntimeService/Version
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.925137828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bed87b86-1f77-43f6-8501-7aafa85d7215 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.925811265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211297925785271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bed87b86-1f77-43f6-8501-7aafa85d7215 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.926302364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=858377ae-4c10-4463-a5b7-00101497d189 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.926350846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=858377ae-4c10-4463-a5b7-00101497d189 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.926705461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c30d193150707b3f9b12396760512f8d249c5ebcbc4a51ca3d3ca386a51c4ff,PodSandboxId:60e9f7ac976723d2230470b63b82e5c081cd8c1fc1cbbcdb173d452c9c6b0449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716211290396640099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b27n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afd2688-a776-4231-a1d0-4db5872302d2,},Annotations:map[string]string{io.kubernetes.container.hash: b6722277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cda358ecc06bf44bd468d35aa491cac7bc4ec0137727150f2074ae4e386c3a,PodSandboxId:e113483258e594defdbe9ae7113c196ea16bc05d24900cb71a25f125861c748c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211283213182516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2e355798,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:277f4ff1caa9cee0b4bd0a526790f4b2b26a040ee10957fd50f0fcf0f93f74a3,PodSandboxId:90afebc98f9795ebc07e0d69ab64a10f18528c56d41d1e2f8c6c42b4eb6cb5ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716211282890128634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7xb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7
2074a2-3fb1-408c-ab54-9735f008b857,},Annotations:map[string]string{io.kubernetes.container.hash: f128ecc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6f40de0d3dc6d5644627092a6832cecd79fe8098a45a78255492cec59c67d34,PodSandboxId:bde388f9144e365f9429cc8fa3573e2d526850c96914a02ff17898173c698ca6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716211276969721935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6978e8ee
13e1e618c11ddbcb3c07350,},Annotations:map[string]string{io.kubernetes.container.hash: 443a51b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30be4b5ee005dfaa2ee1705c76c99f2eb5ee17ba7d9286337ed578e21da9db3a,PodSandboxId:e053f3efc48fcb23eac5d2a51d15304b4b9ed2b1c4ab58db109b950c9416c530,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716211276922901312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068c1d9f19f5c765c1f6
46568e62fc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e726c11c1cd0d82fad82e29b89867c294745893fefeb8000b3800061ab3b9194,PodSandboxId:87bfea6809d69ab8c355bd3b5ca351d2569af8a2ddd1ada9a3435fdf33533564,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716211276910020709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c07c
ba69f537caa0fc9969901db586e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a64f593040cb3126801ea26d619120813b67c70d571f63c9a9f526a223a76c9,PodSandboxId:2198e3bb0405cf44b2ba612438405491b6ebb6a42c3fbfcde285459d879668f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716211276890283225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346aca613f263073e636cbeca315ac0a,},Annotations
:map[string]string{io.kubernetes.container.hash: 97c9ea8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=858377ae-4c10-4463-a5b7-00101497d189 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.961778635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a91bdc64-4355-4c1f-a342-8b0802b3759e name=/runtime.v1.RuntimeService/Version
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.961841594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a91bdc64-4355-4c1f-a342-8b0802b3759e name=/runtime.v1.RuntimeService/Version
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.963273780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26437712-0cb3-472c-a502-6fc02aee3bc2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.963807467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211297963781664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26437712-0cb3-472c-a502-6fc02aee3bc2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.964352001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19bd3170-8953-4ab2-8c97-926d55b132ca name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.964398962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19bd3170-8953-4ab2-8c97-926d55b132ca name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.964707849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c30d193150707b3f9b12396760512f8d249c5ebcbc4a51ca3d3ca386a51c4ff,PodSandboxId:60e9f7ac976723d2230470b63b82e5c081cd8c1fc1cbbcdb173d452c9c6b0449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716211290396640099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b27n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afd2688-a776-4231-a1d0-4db5872302d2,},Annotations:map[string]string{io.kubernetes.container.hash: b6722277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cda358ecc06bf44bd468d35aa491cac7bc4ec0137727150f2074ae4e386c3a,PodSandboxId:e113483258e594defdbe9ae7113c196ea16bc05d24900cb71a25f125861c748c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211283213182516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2e355798,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:277f4ff1caa9cee0b4bd0a526790f4b2b26a040ee10957fd50f0fcf0f93f74a3,PodSandboxId:90afebc98f9795ebc07e0d69ab64a10f18528c56d41d1e2f8c6c42b4eb6cb5ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716211282890128634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7xb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7
2074a2-3fb1-408c-ab54-9735f008b857,},Annotations:map[string]string{io.kubernetes.container.hash: f128ecc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6f40de0d3dc6d5644627092a6832cecd79fe8098a45a78255492cec59c67d34,PodSandboxId:bde388f9144e365f9429cc8fa3573e2d526850c96914a02ff17898173c698ca6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716211276969721935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6978e8ee
13e1e618c11ddbcb3c07350,},Annotations:map[string]string{io.kubernetes.container.hash: 443a51b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30be4b5ee005dfaa2ee1705c76c99f2eb5ee17ba7d9286337ed578e21da9db3a,PodSandboxId:e053f3efc48fcb23eac5d2a51d15304b4b9ed2b1c4ab58db109b950c9416c530,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716211276922901312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068c1d9f19f5c765c1f6
46568e62fc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e726c11c1cd0d82fad82e29b89867c294745893fefeb8000b3800061ab3b9194,PodSandboxId:87bfea6809d69ab8c355bd3b5ca351d2569af8a2ddd1ada9a3435fdf33533564,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716211276910020709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c07c
ba69f537caa0fc9969901db586e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a64f593040cb3126801ea26d619120813b67c70d571f63c9a9f526a223a76c9,PodSandboxId:2198e3bb0405cf44b2ba612438405491b6ebb6a42c3fbfcde285459d879668f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716211276890283225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346aca613f263073e636cbeca315ac0a,},Annotations
:map[string]string{io.kubernetes.container.hash: 97c9ea8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19bd3170-8953-4ab2-8c97-926d55b132ca name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.996344617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=259096ab-426b-480c-9cec-653fc6b00c28 name=/runtime.v1.RuntimeService/Version
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.996406762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=259096ab-426b-480c-9cec-653fc6b00c28 name=/runtime.v1.RuntimeService/Version
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.997271556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a07045a-0c62-4cc0-adae-24c65cd951ba name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.997860267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211297997837812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a07045a-0c62-4cc0-adae-24c65cd951ba name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.998478226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5350d6e8-eb04-4a5e-bcc3-67e6e340caa7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.998654864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5350d6e8-eb04-4a5e-bcc3-67e6e340caa7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:21:37 test-preload-446349 crio[688]: time="2024-05-20 13:21:37.998825736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c30d193150707b3f9b12396760512f8d249c5ebcbc4a51ca3d3ca386a51c4ff,PodSandboxId:60e9f7ac976723d2230470b63b82e5c081cd8c1fc1cbbcdb173d452c9c6b0449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716211290396640099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b27n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afd2688-a776-4231-a1d0-4db5872302d2,},Annotations:map[string]string{io.kubernetes.container.hash: b6722277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cda358ecc06bf44bd468d35aa491cac7bc4ec0137727150f2074ae4e386c3a,PodSandboxId:e113483258e594defdbe9ae7113c196ea16bc05d24900cb71a25f125861c748c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211283213182516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 2e355798,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:277f4ff1caa9cee0b4bd0a526790f4b2b26a040ee10957fd50f0fcf0f93f74a3,PodSandboxId:90afebc98f9795ebc07e0d69ab64a10f18528c56d41d1e2f8c6c42b4eb6cb5ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716211282890128634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7xb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7
2074a2-3fb1-408c-ab54-9735f008b857,},Annotations:map[string]string{io.kubernetes.container.hash: f128ecc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6f40de0d3dc6d5644627092a6832cecd79fe8098a45a78255492cec59c67d34,PodSandboxId:bde388f9144e365f9429cc8fa3573e2d526850c96914a02ff17898173c698ca6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716211276969721935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6978e8ee
13e1e618c11ddbcb3c07350,},Annotations:map[string]string{io.kubernetes.container.hash: 443a51b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30be4b5ee005dfaa2ee1705c76c99f2eb5ee17ba7d9286337ed578e21da9db3a,PodSandboxId:e053f3efc48fcb23eac5d2a51d15304b4b9ed2b1c4ab58db109b950c9416c530,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716211276922901312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068c1d9f19f5c765c1f6
46568e62fc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e726c11c1cd0d82fad82e29b89867c294745893fefeb8000b3800061ab3b9194,PodSandboxId:87bfea6809d69ab8c355bd3b5ca351d2569af8a2ddd1ada9a3435fdf33533564,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716211276910020709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c07c
ba69f537caa0fc9969901db586e,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a64f593040cb3126801ea26d619120813b67c70d571f63c9a9f526a223a76c9,PodSandboxId:2198e3bb0405cf44b2ba612438405491b6ebb6a42c3fbfcde285459d879668f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716211276890283225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-446349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346aca613f263073e636cbeca315ac0a,},Annotations
:map[string]string{io.kubernetes.container.hash: 97c9ea8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5350d6e8-eb04-4a5e-bcc3-67e6e340caa7 name=/runtime.v1.RuntimeService/ListContainers
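
The CRI-O entries above are routine debug traffic: the kubelet polls Version, ImageFsInfo and ListContainers over the CRI socket several times per second, and every container in the responses is CONTAINER_RUNNING. The same queries can be issued by hand with crictl against the socket named in the node annotations below; a rough sketch, not part of the test run itself:

	# query the CRI-O endpoint the kubelet is using
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps           # ListContainers
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageFsInfo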
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c30d19315070       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   60e9f7ac97672       coredns-6d4b75cb6d-6b27n
	c9cda358ecc06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   e113483258e59       storage-provisioner
	277f4ff1caa9c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   90afebc98f979       kube-proxy-8j7xb
	a6f40de0d3dc6       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   bde388f9144e3       kube-apiserver-test-preload-446349
	30be4b5ee005d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   e053f3efc48fc       kube-scheduler-test-preload-446349
	e726c11c1cd0d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   87bfea6809d69       kube-controller-manager-test-preload-446349
	7a64f593040cb       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   2198e3bb0405c       etcd-test-preload-446349
	
	
	==> coredns [8c30d193150707b3f9b12396760512f8d249c5ebcbc4a51ca3d3ca386a51c4ff] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44975 - 63014 "HINFO IN 7353762513201886056.4240613979063721202. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023750958s
	
	
	==> describe nodes <==
	Name:               test-preload-446349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-446349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=test-preload-446349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_20_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:20:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-446349
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:21:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:21:31 +0000   Mon, 20 May 2024 13:20:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:21:31 +0000   Mon, 20 May 2024 13:20:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:21:31 +0000   Mon, 20 May 2024 13:20:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:21:31 +0000   Mon, 20 May 2024 13:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    test-preload-446349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06154fe3826f46ae82ce646781add116
	  System UUID:                06154fe3-826f-46ae-82ce-646781add116
	  Boot ID:                    26e6e426-d8d3-48b0-98b8-782844effc4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6b27n                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-test-preload-446349                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-446349             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-446349    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-8j7xb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-446349             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node test-preload-446349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node test-preload-446349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node test-preload-446349 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet          Node test-preload-446349 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node test-preload-446349 event: Registered Node test-preload-446349 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-446349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-446349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-446349 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-446349 event: Registered Node test-preload-446349 in Controller
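
The node summary and event list above are the usual node description view; if a similar snapshot is needed while the cluster is still up, something like the following should reproduce it (a sketch, assuming the kubeconfig context written by this run):

	$ kubectl --context test-preload-446349 describe node test-preload-446349
	$ kubectl --context test-preload-446349 get events -A --sort-by=.lastTimestamp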
	
	
	==> dmesg <==
	[May20 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051110] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040289] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.491086] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.400903] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648771] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.457609] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.061466] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052380] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.214723] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.122772] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.263493] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[May20 13:21] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.060110] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.665823] systemd-fstab-generator[1082]: Ignoring "noauto" option for root device
	[  +4.380643] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.698003] systemd-fstab-generator[1704]: Ignoring "noauto" option for root device
	[  +6.170372] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [7a64f593040cb3126801ea26d619120813b67c70d571f63c9a9f526a223a76c9] <==
	{"level":"info","ts":"2024-05-20T13:21:17.347Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c194f0f1585e7a7d","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-20T13:21:17.348Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-20T13:21:17.349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d switched to configuration voters=(13949038865233640061)"}
	{"level":"info","ts":"2024-05-20T13:21:17.349Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","added-peer-id":"c194f0f1585e7a7d","added-peer-peer-urls":["https://192.168.39.147:2380"]}
	{"level":"info","ts":"2024-05-20T13:21:17.349Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:21:17.349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:21:17.355Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:21:17.359Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c194f0f1585e7a7d","initial-advertise-peer-urls":["https://192.168.39.147:2380"],"listen-peer-urls":["https://192.168.39.147:2380"],"advertise-client-urls":["https://192.168.39.147:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.147:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:21:17.359Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:21:17.358Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-05-20T13:21:17.359Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgPreVoteResp from c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgVoteResp from c194f0f1585e7a7d at term 3"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became leader at term 3"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c194f0f1585e7a7d elected leader c194f0f1585e7a7d at term 3"}
	{"level":"info","ts":"2024-05-20T13:21:18.895Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c194f0f1585e7a7d","local-member-attributes":"{Name:test-preload-446349 ClientURLs:[https://192.168.39.147:2379]}","request-path":"/0/members/c194f0f1585e7a7d/attributes","cluster-id":"582b8c8375119e1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:21:18.897Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:21:18.897Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:21:18.898Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T13:21:18.899Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.147:2379"}
	{"level":"info","ts":"2024-05-20T13:21:18.908Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:21:18.908Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:21:38 up 0 min,  0 users,  load average: 0.96, 0.29, 0.10
	Linux test-preload-446349 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a6f40de0d3dc6d5644627092a6832cecd79fe8098a45a78255492cec59c67d34] <==
	I0520 13:21:21.255665       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0520 13:21:21.255707       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 13:21:21.255886       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 13:21:21.265990       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 13:21:21.279832       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 13:21:21.280593       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0520 13:21:21.280672       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0520 13:21:21.335421       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:21:21.350467       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:21:21.350830       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:21:21.355208       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0520 13:21:21.356575       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0520 13:21:21.357411       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:21:21.380731       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0520 13:21:21.417129       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0520 13:21:21.937906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 13:21:22.253401       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:21:22.911096       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0520 13:21:22.922627       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0520 13:21:22.965250       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0520 13:21:23.003644       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:21:23.012966       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:21:23.273479       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0520 13:21:33.693487       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 13:21:33.742719       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e726c11c1cd0d82fad82e29b89867c294745893fefeb8000b3800061ab3b9194] <==
	W0520 13:21:33.703486       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-446349. Assuming now as a timestamp.
	I0520 13:21:33.703608       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0520 13:21:33.706264       1 shared_informer.go:262] Caches are synced for namespace
	I0520 13:21:33.713229       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0520 13:21:33.726405       1 shared_informer.go:262] Caches are synced for expand
	I0520 13:21:33.728271       1 shared_informer.go:262] Caches are synced for endpoint
	I0520 13:21:33.730016       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0520 13:21:33.735061       1 shared_informer.go:262] Caches are synced for cronjob
	I0520 13:21:33.742620       1 shared_informer.go:262] Caches are synced for node
	I0520 13:21:33.742668       1 range_allocator.go:173] Starting range CIDR allocator
	I0520 13:21:33.742691       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0520 13:21:33.742716       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0520 13:21:33.743254       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0520 13:21:33.748354       1 shared_informer.go:262] Caches are synced for job
	I0520 13:21:33.748460       1 shared_informer.go:262] Caches are synced for PVC protection
	I0520 13:21:33.805053       1 shared_informer.go:262] Caches are synced for persistent volume
	I0520 13:21:33.814364       1 shared_informer.go:262] Caches are synced for ephemeral
	I0520 13:21:33.824642       1 shared_informer.go:262] Caches are synced for attach detach
	I0520 13:21:33.833001       1 shared_informer.go:262] Caches are synced for stateful set
	I0520 13:21:33.909953       1 shared_informer.go:262] Caches are synced for daemon sets
	I0520 13:21:33.916615       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 13:21:33.967145       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 13:21:34.371181       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 13:21:34.394617       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 13:21:34.394753       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [277f4ff1caa9cee0b4bd0a526790f4b2b26a040ee10957fd50f0fcf0f93f74a3] <==
	I0520 13:21:23.197481       1 node.go:163] Successfully retrieved node IP: 192.168.39.147
	I0520 13:21:23.197892       1 server_others.go:138] "Detected node IP" address="192.168.39.147"
	I0520 13:21:23.198083       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0520 13:21:23.256664       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0520 13:21:23.256698       1 server_others.go:206] "Using iptables Proxier"
	I0520 13:21:23.257792       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0520 13:21:23.260827       1 server.go:661] "Version info" version="v1.24.4"
	I0520 13:21:23.260910       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:21:23.263423       1 config.go:317] "Starting service config controller"
	I0520 13:21:23.267064       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0520 13:21:23.267164       1 config.go:226] "Starting endpoint slice config controller"
	I0520 13:21:23.267187       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0520 13:21:23.269570       1 config.go:444] "Starting node config controller"
	I0520 13:21:23.269597       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0520 13:21:23.367894       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0520 13:21:23.367967       1 shared_informer.go:262] Caches are synced for service config
	I0520 13:21:23.369781       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [30be4b5ee005dfaa2ee1705c76c99f2eb5ee17ba7d9286337ed578e21da9db3a] <==
	I0520 13:21:18.142328       1 serving.go:348] Generated self-signed cert in-memory
	I0520 13:21:21.386936       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0520 13:21:21.388084       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:21:21.394442       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0520 13:21:21.394593       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0520 13:21:21.395335       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0520 13:21:21.396020       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 13:21:21.404625       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 13:21:21.404653       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 13:21:21.404672       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0520 13:21:21.404676       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0520 13:21:21.496333       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0520 13:21:21.505740       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0520 13:21:21.505805       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.159876    1089 apiserver.go:52] "Watching apiserver"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.165293    1089 topology_manager.go:200] "Topology Admit Handler"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.165408    1089 topology_manager.go:200] "Topology Admit Handler"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.165472    1089 topology_manager.go:200] "Topology Admit Handler"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.166714    1089 topology_manager.go:200] "Topology Admit Handler"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: E0520 13:21:22.168936    1089 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-6b27n" podUID=2afd2688-a776-4231-a1d0-4db5872302d2
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239065    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a72074a2-3fb1-408c-ab54-9735f008b857-kube-proxy\") pod \"kube-proxy-8j7xb\" (UID: \"a72074a2-3fb1-408c-ab54-9735f008b857\") " pod="kube-system/kube-proxy-8j7xb"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239106    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume\") pod \"coredns-6d4b75cb6d-6b27n\" (UID: \"2afd2688-a776-4231-a1d0-4db5872302d2\") " pod="kube-system/coredns-6d4b75cb6d-6b27n"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239128    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf-tmp\") pod \"storage-provisioner\" (UID: \"dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf\") " pod="kube-system/storage-provisioner"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239150    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7699w\" (UniqueName: \"kubernetes.io/projected/dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf-kube-api-access-7699w\") pod \"storage-provisioner\" (UID: \"dbd7b8c6-837f-4bd7-90bf-7e96e54f5dcf\") " pod="kube-system/storage-provisioner"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239167    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8ph\" (UniqueName: \"kubernetes.io/projected/2afd2688-a776-4231-a1d0-4db5872302d2-kube-api-access-9c8ph\") pod \"coredns-6d4b75cb6d-6b27n\" (UID: \"2afd2688-a776-4231-a1d0-4db5872302d2\") " pod="kube-system/coredns-6d4b75cb6d-6b27n"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239185    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a72074a2-3fb1-408c-ab54-9735f008b857-lib-modules\") pod \"kube-proxy-8j7xb\" (UID: \"a72074a2-3fb1-408c-ab54-9735f008b857\") " pod="kube-system/kube-proxy-8j7xb"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239202    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flznf\" (UniqueName: \"kubernetes.io/projected/a72074a2-3fb1-408c-ab54-9735f008b857-kube-api-access-flznf\") pod \"kube-proxy-8j7xb\" (UID: \"a72074a2-3fb1-408c-ab54-9735f008b857\") " pod="kube-system/kube-proxy-8j7xb"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239221    1089 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a72074a2-3fb1-408c-ab54-9735f008b857-xtables-lock\") pod \"kube-proxy-8j7xb\" (UID: \"a72074a2-3fb1-408c-ab54-9735f008b857\") " pod="kube-system/kube-proxy-8j7xb"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.239235    1089 reconciler.go:159] "Reconciler: start to sync state"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: I0520 13:21:22.272708    1089 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9c34c08c-3b73-4903-8b1c-e67d7ab7c9fa path="/var/lib/kubelet/pods/9c34c08c-3b73-4903-8b1c-e67d7ab7c9fa/volumes"
	May 20 13:21:22 test-preload-446349 kubelet[1089]: E0520 13:21:22.343088    1089 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 13:21:22 test-preload-446349 kubelet[1089]: E0520 13:21:22.343362    1089 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume podName:2afd2688-a776-4231-a1d0-4db5872302d2 nodeName:}" failed. No retries permitted until 2024-05-20 13:21:22.843327243 +0000 UTC m=+6.810853699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume") pod "coredns-6d4b75cb6d-6b27n" (UID: "2afd2688-a776-4231-a1d0-4db5872302d2") : object "kube-system"/"coredns" not registered
	May 20 13:21:22 test-preload-446349 kubelet[1089]: E0520 13:21:22.848215    1089 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 13:21:22 test-preload-446349 kubelet[1089]: E0520 13:21:22.848280    1089 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume podName:2afd2688-a776-4231-a1d0-4db5872302d2 nodeName:}" failed. No retries permitted until 2024-05-20 13:21:23.848265408 +0000 UTC m=+7.815791849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume") pod "coredns-6d4b75cb6d-6b27n" (UID: "2afd2688-a776-4231-a1d0-4db5872302d2") : object "kube-system"/"coredns" not registered
	May 20 13:21:23 test-preload-446349 kubelet[1089]: E0520 13:21:23.858057    1089 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 13:21:23 test-preload-446349 kubelet[1089]: E0520 13:21:23.858624    1089 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume podName:2afd2688-a776-4231-a1d0-4db5872302d2 nodeName:}" failed. No retries permitted until 2024-05-20 13:21:25.858586392 +0000 UTC m=+9.826112846 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume") pod "coredns-6d4b75cb6d-6b27n" (UID: "2afd2688-a776-4231-a1d0-4db5872302d2") : object "kube-system"/"coredns" not registered
	May 20 13:21:24 test-preload-446349 kubelet[1089]: E0520 13:21:24.265182    1089 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-6b27n" podUID=2afd2688-a776-4231-a1d0-4db5872302d2
	May 20 13:21:25 test-preload-446349 kubelet[1089]: E0520 13:21:25.879271    1089 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 20 13:21:25 test-preload-446349 kubelet[1089]: E0520 13:21:25.879785    1089 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume podName:2afd2688-a776-4231-a1d0-4db5872302d2 nodeName:}" failed. No retries permitted until 2024-05-20 13:21:29.879758255 +0000 UTC m=+13.847284698 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2afd2688-a776-4231-a1d0-4db5872302d2-config-volume") pod "coredns-6d4b75cb6d-6b27n" (UID: "2afd2688-a776-4231-a1d0-4db5872302d2") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [c9cda358ecc06bf44bd468d35aa491cac7bc4ec0137727150f2074ae4e386c3a] <==
	I0520 13:21:23.305311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-446349 -n test-preload-446349
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-446349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-446349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-446349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-446349: (1.131130372s)
--- FAIL: TestPreload (169.63s)
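
Note on the retry cadence in the kubelet log above: the durationBeforeRetry values double on each failed MountVolume.SetUp attempt (500ms, 1s, 2s, 4s) while the node waits for the coredns ConfigMap and the CNI config to become available. That is ordinary capped exponential backoff; the Go sketch below shows the general pattern only. It is not the kubelet's actual nestedpendingoperations code, and the base delay, cap, and attempt count are illustrative assumptions.

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// retryWithBackoff retries op with doubling delays (base, 2*base, 4*base, ...)
	// capped at max, mirroring the 500ms -> 1s -> 2s -> 4s cadence in the log above.
	func retryWithBackoff(op func() error, base, max time.Duration, attempts int) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := op(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
			if delay > max {
				delay = max
			}
		}
		return errors.New("gave up after retry budget was exhausted")
	}
	
	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				// Stand-in for the repeated "object kube-system/coredns not registered" failures.
				return errors.New("config-volume not ready yet")
			}
			return nil
		}, 500*time.Millisecond, 8*time.Second, 6)
		fmt.Println("err:", err, "attempts:", calls)
	}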

                                                
                                    
x
+
TestKubernetesUpgrade (414.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m49.601225466s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-785943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-785943" primary control-plane node in "kubernetes-upgrade-785943" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:23:29.637837  898831 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:23:29.638124  898831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:23:29.638134  898831 out.go:304] Setting ErrFile to fd 2...
	I0520 13:23:29.638139  898831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:23:29.638398  898831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:23:29.638988  898831 out.go:298] Setting JSON to false
	I0520 13:23:29.640651  898831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11158,"bootTime":1716200252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:23:29.640711  898831 start.go:139] virtualization: kvm guest
	I0520 13:23:29.643143  898831 out.go:177] * [kubernetes-upgrade-785943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:23:29.646164  898831 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:23:29.645359  898831 notify.go:220] Checking for updates...
	I0520 13:23:29.648561  898831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:23:29.650650  898831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:23:29.652878  898831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:23:29.655965  898831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:23:29.658208  898831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:23:29.659827  898831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:23:29.700082  898831 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 13:23:29.701336  898831 start.go:297] selected driver: kvm2
	I0520 13:23:29.701359  898831 start.go:901] validating driver "kvm2" against <nil>
	I0520 13:23:29.701375  898831 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:23:29.702485  898831 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:23:29.718743  898831 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:23:29.735894  898831 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:23:29.735966  898831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 13:23:29.736277  898831 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 13:23:29.736303  898831 cni.go:84] Creating CNI manager for ""
	I0520 13:23:29.736311  898831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:23:29.736319  898831 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 13:23:29.736385  898831 start.go:340] cluster config:
	{Name:kubernetes-upgrade-785943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:23:29.736516  898831 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:23:29.738452  898831 out.go:177] * Starting "kubernetes-upgrade-785943" primary control-plane node in "kubernetes-upgrade-785943" cluster
	I0520 13:23:29.739759  898831 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 13:23:29.739817  898831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 13:23:29.739828  898831 cache.go:56] Caching tarball of preloaded images
	I0520 13:23:29.739930  898831 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:23:29.739944  898831 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 13:23:29.740396  898831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/config.json ...
	I0520 13:23:29.740434  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/config.json: {Name:mk9c7f9a6dd8143f73d269327c27192b5aeea58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:23:29.740591  898831 start.go:360] acquireMachinesLock for kubernetes-upgrade-785943: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:23:52.003710  898831 start.go:364] duration metric: took 22.263091912s to acquireMachinesLock for "kubernetes-upgrade-785943"
	I0520 13:23:52.003781  898831 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-785943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:23:52.003886  898831 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 13:23:52.006155  898831 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 13:23:52.006362  898831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:23:52.006426  898831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:23:52.024345  898831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0520 13:23:52.024720  898831 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:23:52.025284  898831 main.go:141] libmachine: Using API Version  1
	I0520 13:23:52.025303  898831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:23:52.025717  898831 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:23:52.025957  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetMachineName
	I0520 13:23:52.026138  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:23:52.026326  898831 start.go:159] libmachine.API.Create for "kubernetes-upgrade-785943" (driver="kvm2")
	I0520 13:23:52.026355  898831 client.go:168] LocalClient.Create starting
	I0520 13:23:52.026383  898831 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 13:23:52.026419  898831 main.go:141] libmachine: Decoding PEM data...
	I0520 13:23:52.026435  898831 main.go:141] libmachine: Parsing certificate...
	I0520 13:23:52.026486  898831 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 13:23:52.026503  898831 main.go:141] libmachine: Decoding PEM data...
	I0520 13:23:52.026521  898831 main.go:141] libmachine: Parsing certificate...
	I0520 13:23:52.026537  898831 main.go:141] libmachine: Running pre-create checks...
	I0520 13:23:52.026550  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .PreCreateCheck
	I0520 13:23:52.026932  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetConfigRaw
	I0520 13:23:52.027356  898831 main.go:141] libmachine: Creating machine...
	I0520 13:23:52.027370  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .Create
	I0520 13:23:52.027508  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Creating KVM machine...
	I0520 13:23:52.028450  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found existing default KVM network
	I0520 13:23:52.029500  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:52.029307  899149 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5e:42:9d} reservation:<nil>}
	I0520 13:23:52.030349  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:52.030272  899149 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000256330}
	I0520 13:23:52.030392  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | created network xml: 
	I0520 13:23:52.030427  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | <network>
	I0520 13:23:52.030444  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |   <name>mk-kubernetes-upgrade-785943</name>
	I0520 13:23:52.030453  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |   <dns enable='no'/>
	I0520 13:23:52.030464  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |   
	I0520 13:23:52.030477  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0520 13:23:52.030489  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |     <dhcp>
	I0520 13:23:52.030504  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0520 13:23:52.030517  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |     </dhcp>
	I0520 13:23:52.030528  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |   </ip>
	I0520 13:23:52.030550  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG |   
	I0520 13:23:52.030561  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | </network>
	I0520 13:23:52.030600  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | 
	I0520 13:23:52.035334  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | trying to create private KVM network mk-kubernetes-upgrade-785943 192.168.50.0/24...
	I0520 13:23:52.102117  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | private KVM network mk-kubernetes-upgrade-785943 192.168.50.0/24 created
	I0520 13:23:52.102156  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943 ...
	I0520 13:23:52.102176  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:23:52.102189  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:52.102136  899149 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:23:52.102408  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:23:52.351918  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:52.351778  899149 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa...
	I0520 13:23:52.404792  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:52.404613  899149 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/kubernetes-upgrade-785943.rawdisk...
	I0520 13:23:52.404834  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Writing magic tar header
	I0520 13:23:52.404855  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Writing SSH key tar header
	I0520 13:23:52.404870  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:52.404732  899149 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943 ...
	I0520 13:23:52.404885  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943 (perms=drwx------)
	I0520 13:23:52.404910  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:23:52.404931  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943
	I0520 13:23:52.404947  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 13:23:52.404962  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 13:23:52.404980  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:23:52.404994  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 13:23:52.405017  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 13:23:52.405034  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:23:52.405050  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:23:52.405061  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:23:52.405072  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:23:52.405081  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Checking permissions on dir: /home
	I0520 13:23:52.405092  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Skipping /home - not owner
	I0520 13:23:52.405106  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Creating domain...
	I0520 13:23:52.406298  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) define libvirt domain using xml: 
	I0520 13:23:52.406327  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) <domain type='kvm'>
	I0520 13:23:52.406354  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <name>kubernetes-upgrade-785943</name>
	I0520 13:23:52.406369  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <memory unit='MiB'>2200</memory>
	I0520 13:23:52.406381  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <vcpu>2</vcpu>
	I0520 13:23:52.406391  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <features>
	I0520 13:23:52.406401  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <acpi/>
	I0520 13:23:52.406411  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <apic/>
	I0520 13:23:52.406424  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <pae/>
	I0520 13:23:52.406434  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     
	I0520 13:23:52.406468  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   </features>
	I0520 13:23:52.406495  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <cpu mode='host-passthrough'>
	I0520 13:23:52.406507  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   
	I0520 13:23:52.406517  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   </cpu>
	I0520 13:23:52.406528  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <os>
	I0520 13:23:52.406539  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <type>hvm</type>
	I0520 13:23:52.406549  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <boot dev='cdrom'/>
	I0520 13:23:52.406557  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <boot dev='hd'/>
	I0520 13:23:52.406591  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <bootmenu enable='no'/>
	I0520 13:23:52.406635  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   </os>
	I0520 13:23:52.406650  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   <devices>
	I0520 13:23:52.406662  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <disk type='file' device='cdrom'>
	I0520 13:23:52.406678  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/boot2docker.iso'/>
	I0520 13:23:52.406690  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <target dev='hdc' bus='scsi'/>
	I0520 13:23:52.406703  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <readonly/>
	I0520 13:23:52.406716  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </disk>
	I0520 13:23:52.406728  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <disk type='file' device='disk'>
	I0520 13:23:52.406747  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:23:52.406763  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/kubernetes-upgrade-785943.rawdisk'/>
	I0520 13:23:52.406783  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <target dev='hda' bus='virtio'/>
	I0520 13:23:52.406793  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </disk>
	I0520 13:23:52.406809  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <interface type='network'>
	I0520 13:23:52.406825  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <source network='mk-kubernetes-upgrade-785943'/>
	I0520 13:23:52.406860  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <model type='virtio'/>
	I0520 13:23:52.406884  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </interface>
	I0520 13:23:52.406895  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <interface type='network'>
	I0520 13:23:52.406906  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <source network='default'/>
	I0520 13:23:52.406919  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <model type='virtio'/>
	I0520 13:23:52.406940  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </interface>
	I0520 13:23:52.406969  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <serial type='pty'>
	I0520 13:23:52.406991  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <target port='0'/>
	I0520 13:23:52.407009  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </serial>
	I0520 13:23:52.407019  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <console type='pty'>
	I0520 13:23:52.407029  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <target type='serial' port='0'/>
	I0520 13:23:52.407039  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </console>
	I0520 13:23:52.407051  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     <rng model='virtio'>
	I0520 13:23:52.407063  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)       <backend model='random'>/dev/random</backend>
	I0520 13:23:52.407072  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     </rng>
	I0520 13:23:52.407086  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     
	I0520 13:23:52.407098  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)     
	I0520 13:23:52.407107  898831 main.go:141] libmachine: (kubernetes-upgrade-785943)   </devices>
	I0520 13:23:52.407116  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) </domain>
	I0520 13:23:52.407126  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) 
	I0520 13:23:52.413952  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:f1:48:22 in network default
	I0520 13:23:52.414760  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:52.414784  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Ensuring networks are active...
	I0520 13:23:52.415478  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Ensuring network default is active
	I0520 13:23:52.415926  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Ensuring network mk-kubernetes-upgrade-785943 is active
	I0520 13:23:52.416458  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Getting domain xml...
	I0520 13:23:52.417581  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Creating domain...
	I0520 13:23:53.660364  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Waiting to get IP...
	I0520 13:23:53.661142  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:53.661607  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:53.661635  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:53.661593  899149 retry.go:31] will retry after 246.190785ms: waiting for machine to come up
	I0520 13:23:53.909131  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:53.909732  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:53.909763  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:53.909685  899149 retry.go:31] will retry after 374.542006ms: waiting for machine to come up
	I0520 13:23:54.286556  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:54.286972  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:54.287009  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:54.286927  899149 retry.go:31] will retry after 464.227625ms: waiting for machine to come up
	I0520 13:23:54.752566  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:54.753053  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:54.753082  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:54.753005  899149 retry.go:31] will retry after 484.275776ms: waiting for machine to come up
	I0520 13:23:55.238840  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:55.239340  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:55.239363  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:55.239297  899149 retry.go:31] will retry after 630.108064ms: waiting for machine to come up
	I0520 13:23:55.871303  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:55.871903  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:55.871942  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:55.871835  899149 retry.go:31] will retry after 583.908552ms: waiting for machine to come up
	I0520 13:23:56.457705  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:56.458102  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:56.458131  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:56.458050  899149 retry.go:31] will retry after 1.133477615s: waiting for machine to come up
	I0520 13:23:57.592959  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:57.593506  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:57.593533  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:57.593440  899149 retry.go:31] will retry after 1.289612133s: waiting for machine to come up
	I0520 13:23:58.885190  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:23:58.885676  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:23:58.885698  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:23:58.885627  899149 retry.go:31] will retry after 1.516270746s: waiting for machine to come up
	I0520 13:24:00.404075  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:00.404492  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:24:00.404520  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:24:00.404460  899149 retry.go:31] will retry after 1.711715591s: waiting for machine to come up
	I0520 13:24:02.117797  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:02.118285  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:24:02.118323  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:24:02.118215  899149 retry.go:31] will retry after 2.746769414s: waiting for machine to come up
	I0520 13:24:04.866332  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:04.866772  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:24:04.866826  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:24:04.866714  899149 retry.go:31] will retry after 3.324663365s: waiting for machine to come up
	I0520 13:24:08.192712  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:08.193057  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find current IP address of domain kubernetes-upgrade-785943 in network mk-kubernetes-upgrade-785943
	I0520 13:24:08.193088  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | I0520 13:24:08.193001  899149 retry.go:31] will retry after 4.531441828s: waiting for machine to come up
	I0520 13:24:12.725491  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.725925  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has current primary IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.725946  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Found IP for machine: 192.168.50.63
	I0520 13:24:12.725959  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Reserving static IP address...
	I0520 13:24:12.726339  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-785943", mac: "52:54:00:62:f4:b5", ip: "192.168.50.63"} in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.800544  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Getting to WaitForSSH function...
	I0520 13:24:12.800575  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Reserved static IP address: 192.168.50.63
	I0520 13:24:12.800588  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Waiting for SSH to be available...
	I0520 13:24:12.803254  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.803696  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:12.803726  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.803807  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Using SSH client type: external
	I0520 13:24:12.803835  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Using SSH private key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa (-rw-------)
	I0520 13:24:12.803863  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 13:24:12.803878  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | About to run SSH command:
	I0520 13:24:12.803895  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | exit 0
	I0520 13:24:12.934616  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | SSH cmd err, output: <nil>: 
	I0520 13:24:12.934951  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) KVM machine creation complete!
	I0520 13:24:12.935251  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetConfigRaw
	I0520 13:24:12.935820  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:12.936020  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:12.936234  898831 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 13:24:12.936252  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetState
	I0520 13:24:12.937444  898831 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 13:24:12.937461  898831 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 13:24:12.937470  898831 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 13:24:12.937494  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:12.939855  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.940233  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:12.940263  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:12.940401  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:12.940575  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:12.940712  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:12.940849  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:12.940985  898831 main.go:141] libmachine: Using SSH client type: native
	I0520 13:24:12.941185  898831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:24:12.941195  898831 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 13:24:13.049926  898831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:24:13.049954  898831 main.go:141] libmachine: Detecting the provisioner...
	I0520 13:24:13.049964  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:13.052888  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.053335  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.053366  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.053533  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:13.053740  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.053941  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.054129  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:13.054335  898831 main.go:141] libmachine: Using SSH client type: native
	I0520 13:24:13.054512  898831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:24:13.054522  898831 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 13:24:13.163919  898831 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 13:24:13.164030  898831 main.go:141] libmachine: found compatible host: buildroot
	I0520 13:24:13.164045  898831 main.go:141] libmachine: Provisioning with buildroot...
	I0520 13:24:13.164058  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetMachineName
	I0520 13:24:13.164357  898831 buildroot.go:166] provisioning hostname "kubernetes-upgrade-785943"
	I0520 13:24:13.164385  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetMachineName
	I0520 13:24:13.164661  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:13.167293  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.167639  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.167665  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.167822  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:13.168014  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.168188  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.168311  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:13.168472  898831 main.go:141] libmachine: Using SSH client type: native
	I0520 13:24:13.168668  898831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:24:13.168686  898831 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-785943 && echo "kubernetes-upgrade-785943" | sudo tee /etc/hostname
	I0520 13:24:13.293237  898831 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-785943
	
	I0520 13:24:13.293263  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:13.296228  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.296566  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.296603  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.296764  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:13.296979  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.297158  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.297316  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:13.297466  898831 main.go:141] libmachine: Using SSH client type: native
	I0520 13:24:13.297640  898831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:24:13.297659  898831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-785943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-785943/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-785943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:24:13.411201  898831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:24:13.411236  898831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 13:24:13.411288  898831 buildroot.go:174] setting up certificates
	I0520 13:24:13.411330  898831 provision.go:84] configureAuth start
	I0520 13:24:13.411350  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetMachineName
	I0520 13:24:13.411653  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetIP
	I0520 13:24:13.415619  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.416042  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.416080  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.416218  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:13.418390  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.418691  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.418718  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.418920  898831 provision.go:143] copyHostCerts
	I0520 13:24:13.418983  898831 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 13:24:13.419002  898831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:24:13.419063  898831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 13:24:13.419182  898831 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 13:24:13.419193  898831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:24:13.419217  898831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 13:24:13.419291  898831 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 13:24:13.419299  898831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:24:13.419322  898831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 13:24:13.419403  898831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-785943 san=[127.0.0.1 192.168.50.63 kubernetes-upgrade-785943 localhost minikube]
	I0520 13:24:13.710628  898831 provision.go:177] copyRemoteCerts
	I0520 13:24:13.710722  898831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:24:13.710782  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:13.713976  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.714301  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.714324  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.714533  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:13.714729  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.714897  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:13.715040  898831 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	I0520 13:24:13.800728  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 13:24:13.824405  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0520 13:24:13.847320  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 13:24:13.871833  898831 provision.go:87] duration metric: took 460.48505ms to configureAuth
	I0520 13:24:13.871863  898831 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:24:13.872086  898831 config.go:182] Loaded profile config "kubernetes-upgrade-785943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 13:24:13.872177  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:13.875077  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.875437  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:13.875471  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:13.875644  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:13.875842  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.876038  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:13.876186  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:13.876340  898831 main.go:141] libmachine: Using SSH client type: native
	I0520 13:24:13.876499  898831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:24:13.876512  898831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:24:14.143499  898831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:24:14.143543  898831 main.go:141] libmachine: Checking connection to Docker...
	I0520 13:24:14.143556  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetURL
	I0520 13:24:14.144889  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | Using libvirt version 6000000
	I0520 13:24:14.147211  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.147550  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.147585  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.147766  898831 main.go:141] libmachine: Docker is up and running!
	I0520 13:24:14.147779  898831 main.go:141] libmachine: Reticulating splines...
	I0520 13:24:14.147792  898831 client.go:171] duration metric: took 22.121423945s to LocalClient.Create
	I0520 13:24:14.147822  898831 start.go:167] duration metric: took 22.121497754s to libmachine.API.Create "kubernetes-upgrade-785943"
	I0520 13:24:14.147835  898831 start.go:293] postStartSetup for "kubernetes-upgrade-785943" (driver="kvm2")
	I0520 13:24:14.147850  898831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:24:14.147867  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:14.148104  898831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:24:14.148155  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:14.150378  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.150709  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.150731  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.150912  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:14.151097  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:14.151298  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:14.151474  898831 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	I0520 13:24:14.237886  898831 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:24:14.242177  898831 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:24:14.242201  898831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 13:24:14.242259  898831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 13:24:14.242342  898831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 13:24:14.242434  898831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:24:14.251805  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:24:14.275790  898831 start.go:296] duration metric: took 127.938516ms for postStartSetup
	I0520 13:24:14.275838  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetConfigRaw
	I0520 13:24:14.276451  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetIP
	I0520 13:24:14.278916  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.279351  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.279382  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.279586  898831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/config.json ...
	I0520 13:24:14.279780  898831 start.go:128] duration metric: took 22.275881175s to createHost
	I0520 13:24:14.279803  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:14.281984  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.282253  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.282284  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.282382  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:14.282583  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:14.282755  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:14.282900  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:14.283070  898831 main.go:141] libmachine: Using SSH client type: native
	I0520 13:24:14.283230  898831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:24:14.283240  898831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 13:24:14.391214  898831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211454.364745911
	
	I0520 13:24:14.391242  898831 fix.go:216] guest clock: 1716211454.364745911
	I0520 13:24:14.391252  898831 fix.go:229] Guest: 2024-05-20 13:24:14.364745911 +0000 UTC Remote: 2024-05-20 13:24:14.279791405 +0000 UTC m=+44.684455202 (delta=84.954506ms)
	I0520 13:24:14.391296  898831 fix.go:200] guest clock delta is within tolerance: 84.954506ms
	I0520 13:24:14.391301  898831 start.go:83] releasing machines lock for "kubernetes-upgrade-785943", held for 22.38756272s
	I0520 13:24:14.391331  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:14.391646  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetIP
	I0520 13:24:14.395043  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.395437  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.395469  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.395661  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:14.396111  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:14.396290  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:24:14.396379  898831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:24:14.396441  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:14.396478  898831 ssh_runner.go:195] Run: cat /version.json
	I0520 13:24:14.396501  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:24:14.399236  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.399571  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.399599  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.399698  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.399713  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:14.399923  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:14.400094  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:14.400138  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:14.400165  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:14.400264  898831 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	I0520 13:24:14.400343  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:24:14.400491  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:24:14.400644  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:24:14.400782  898831 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	W0520 13:24:14.505412  898831 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:24:14.505503  898831 ssh_runner.go:195] Run: systemctl --version
	I0520 13:24:14.512023  898831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:24:14.678020  898831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:24:14.685308  898831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:24:14.685385  898831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:24:14.704718  898831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 13:24:14.704745  898831 start.go:494] detecting cgroup driver to use...
	I0520 13:24:14.704827  898831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:24:14.720889  898831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:24:14.741307  898831 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:24:14.741367  898831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:24:14.761497  898831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:24:14.781015  898831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:24:14.906339  898831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:24:15.075586  898831 docker.go:233] disabling docker service ...
	I0520 13:24:15.075660  898831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:24:15.090645  898831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:24:15.105736  898831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:24:15.241520  898831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:24:15.362395  898831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:24:15.376998  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:24:15.395741  898831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 13:24:15.395820  898831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:24:15.408163  898831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:24:15.408242  898831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:24:15.420382  898831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:24:15.432823  898831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:24:15.445021  898831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
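The four commands above rewrite CRI-O's drop-in config so the pause image, cgroup driver, and conmon cgroup match what the v1.20.0 kubeadm bootstrap expects. As a rough sanity check (illustrative only; the resulting file contents below are an assumption, not captured in this log), the drop-in should end up containing roughly:

	# Hypothetical check of /etc/crio/crio.conf.d/02-crio.conf after the sed edits above
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"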
	I0520 13:24:15.457283  898831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:24:15.468599  898831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 13:24:15.468652  898831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 13:24:15.484351  898831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:24:15.495591  898831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:24:15.635805  898831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:24:15.802124  898831 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:24:15.802200  898831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:24:15.807182  898831 start.go:562] Will wait 60s for crictl version
	I0520 13:24:15.807242  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:15.811163  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:24:15.848074  898831 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:24:15.848159  898831 ssh_runner.go:195] Run: crio --version
	I0520 13:24:15.875983  898831 ssh_runner.go:195] Run: crio --version
	I0520 13:24:15.905560  898831 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 13:24:15.907059  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetIP
	I0520 13:24:15.910003  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:15.910497  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:24:15.910527  898831 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:24:15.910745  898831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 13:24:15.915166  898831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:24:15.928631  898831 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-785943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:24:15.928769  898831 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 13:24:15.928833  898831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:24:15.967160  898831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 13:24:15.967254  898831 ssh_runner.go:195] Run: which lz4
	I0520 13:24:15.972315  898831 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 13:24:15.976803  898831 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 13:24:15.976836  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 13:24:17.692874  898831 crio.go:462] duration metric: took 1.720624304s to copy over tarball
	I0520 13:24:17.692958  898831 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 13:24:20.376934  898831 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.683935153s)
	I0520 13:24:20.376974  898831 crio.go:469] duration metric: took 2.684070105s to extract the tarball
	I0520 13:24:20.376985  898831 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 13:24:20.419967  898831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:24:20.470921  898831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 13:24:20.470948  898831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 13:24:20.471032  898831 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 13:24:20.471057  898831 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 13:24:20.471062  898831 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 13:24:20.471070  898831 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 13:24:20.471032  898831 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:24:20.471081  898831 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 13:24:20.471032  898831 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 13:24:20.471143  898831 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 13:24:20.472850  898831 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 13:24:20.472849  898831 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:24:20.472911  898831 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 13:24:20.472853  898831 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 13:24:20.473049  898831 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 13:24:20.473074  898831 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 13:24:20.473023  898831 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 13:24:20.473177  898831 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 13:24:20.645064  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 13:24:20.647822  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 13:24:20.648766  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 13:24:20.651943  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 13:24:20.658934  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 13:24:20.672151  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 13:24:20.697447  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 13:24:20.776095  898831 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 13:24:20.776145  898831 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 13:24:20.776195  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.795927  898831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 13:24:20.801657  898831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 13:24:20.801710  898831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 13:24:20.801759  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.801777  898831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 13:24:20.801816  898831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 13:24:20.801892  898831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 13:24:20.801934  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.801823  898831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 13:24:20.802026  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.856115  898831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 13:24:20.856174  898831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 13:24:20.856234  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.870590  898831 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 13:24:20.870632  898831 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 13:24:20.870644  898831 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 13:24:20.870666  898831 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 13:24:20.870703  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.870778  898831 ssh_runner.go:195] Run: which crictl
	I0520 13:24:20.870704  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 13:24:20.995337  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 13:24:20.995381  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 13:24:20.995420  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 13:24:20.995483  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 13:24:20.995524  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 13:24:20.995569  898831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 13:24:20.995598  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 13:24:21.138889  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 13:24:21.138957  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 13:24:21.138994  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 13:24:21.138957  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 13:24:21.139056  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 13:24:21.139137  898831 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 13:24:21.139181  898831 cache_images.go:92] duration metric: took 668.219964ms to LoadCachedImages
	W0520 13:24:21.139278  898831 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18932-852915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0520 13:24:21.139297  898831 kubeadm.go:928] updating node { 192.168.50.63 8443 v1.20.0 crio true true} ...
	I0520 13:24:21.139436  898831 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-785943 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:24:21.139527  898831 ssh_runner.go:195] Run: crio config
	I0520 13:24:21.197233  898831 cni.go:84] Creating CNI manager for ""
	I0520 13:24:21.197265  898831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:24:21.197295  898831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:24:21.197331  898831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-785943 NodeName:kubernetes-upgrade-785943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 13:24:21.197561  898831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-785943"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:24:21.197652  898831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 13:24:21.210187  898831 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:24:21.210255  898831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:24:21.221641  898831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0520 13:24:21.239595  898831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:24:21.258539  898831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
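The kubeadm config rendered earlier in this log has just been staged on the guest as /var/tmp/minikube/kubeadm.yaml.new. For orientation only, a minimal sketch of how such a config would be applied by hand, assuming the binary and config paths shown in this log; the exact flags minikube's bootstrapper passes are not captured here and are an assumption:

	# Hypothetical manual equivalent of the bootstrap step that follows (assumed flags)
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=all   # minikube passes a more specific list; "all" is for illustration only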
	I0520 13:24:21.276276  898831 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0520 13:24:21.280644  898831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:24:21.293886  898831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:24:21.441553  898831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:24:21.461703  898831 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943 for IP: 192.168.50.63
	I0520 13:24:21.461735  898831 certs.go:194] generating shared ca certs ...
	I0520 13:24:21.461760  898831 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.461963  898831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:24:21.462016  898831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:24:21.462035  898831 certs.go:256] generating profile certs ...
	I0520 13:24:21.462115  898831 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.key
	I0520 13:24:21.462137  898831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.crt with IP's: []
	I0520 13:24:21.525198  898831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.crt ...
	I0520 13:24:21.525231  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.crt: {Name:mkce220ea589920cf1a094a3b64a554cfc241747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.525417  898831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.key ...
	I0520 13:24:21.525440  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.key: {Name:mkf95fd1223d4c6d0e2be8ee287ea515e6680e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.525554  898831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key.1d6667eb
	I0520 13:24:21.525571  898831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt.1d6667eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.63]
	I0520 13:24:21.690286  898831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt.1d6667eb ...
	I0520 13:24:21.690320  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt.1d6667eb: {Name:mk60e1e1ab40422b0759f9d225b1155741ec8048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.690486  898831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key.1d6667eb ...
	I0520 13:24:21.690501  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key.1d6667eb: {Name:mkfebee04131b2e7643c8b5a611b74caa1f0e163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.690570  898831 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt.1d6667eb -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt
	I0520 13:24:21.690648  898831 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key.1d6667eb -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key
	I0520 13:24:21.690698  898831 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.key
	I0520 13:24:21.690713  898831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.crt with IP's: []
	I0520 13:24:21.859056  898831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.crt ...
	I0520 13:24:21.859094  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.crt: {Name:mkf4ef454b71cb6e998e47d36a1af7bd1b8df490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.859290  898831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.key ...
	I0520 13:24:21.859305  898831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.key: {Name:mk91b051ae219b3e4b0d876e14c0e75ad3a259ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:24:21.859552  898831 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:24:21.859595  898831 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:24:21.859605  898831 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:24:21.859625  898831 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:24:21.859646  898831 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:24:21.859674  898831 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:24:21.859711  898831 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:24:21.860389  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:24:21.886461  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:24:21.910705  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:24:21.937680  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:24:21.962460  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 13:24:22.042323  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:24:22.070204  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:24:22.201888  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:24:22.226733  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:24:22.252161  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:24:22.278063  898831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:24:22.305293  898831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:24:22.334404  898831 ssh_runner.go:195] Run: openssl version
	I0520 13:24:22.342005  898831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:24:22.363933  898831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:24:22.371611  898831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:24:22.371720  898831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:24:22.379192  898831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:24:22.398081  898831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:24:22.410527  898831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:24:22.416605  898831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:24:22.416686  898831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:24:22.422820  898831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:24:22.434462  898831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:24:22.445754  898831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:24:22.450494  898831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:24:22.450546  898831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:24:22.456809  898831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 13:24:22.468555  898831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:24:22.472985  898831 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:24:22.473055  898831 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-785943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:24:22.473329  898831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:24:22.473394  898831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:24:22.518497  898831 cri.go:89] found id: ""
	I0520 13:24:22.518568  898831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 13:24:22.531009  898831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 13:24:22.542689  898831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 13:24:22.554647  898831 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 13:24:22.554668  898831 kubeadm.go:156] found existing configuration files:
	
	I0520 13:24:22.554711  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 13:24:22.564483  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 13:24:22.564549  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 13:24:22.574648  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 13:24:22.584004  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 13:24:22.584081  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 13:24:22.593846  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 13:24:22.604508  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 13:24:22.604574  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 13:24:22.615580  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 13:24:22.625608  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 13:24:22.625670  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 13:24:22.636164  898831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 13:24:22.911305  898831 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 13:26:20.999948  898831 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 13:26:21.000038  898831 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 13:26:21.001510  898831 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 13:26:21.001594  898831 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 13:26:21.001672  898831 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 13:26:21.001828  898831 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 13:26:21.001956  898831 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 13:26:21.002065  898831 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 13:26:21.004566  898831 out.go:204]   - Generating certificates and keys ...
	I0520 13:26:21.004670  898831 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 13:26:21.004758  898831 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 13:26:21.004848  898831 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 13:26:21.004932  898831 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 13:26:21.005030  898831 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 13:26:21.005102  898831 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 13:26:21.005179  898831 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 13:26:21.005315  898831 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-785943 localhost] and IPs [192.168.50.63 127.0.0.1 ::1]
	I0520 13:26:21.005366  898831 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 13:26:21.005506  898831 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-785943 localhost] and IPs [192.168.50.63 127.0.0.1 ::1]
	I0520 13:26:21.005609  898831 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 13:26:21.005693  898831 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 13:26:21.005755  898831 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 13:26:21.005806  898831 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 13:26:21.005855  898831 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 13:26:21.005900  898831 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 13:26:21.005952  898831 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 13:26:21.005997  898831 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 13:26:21.006095  898831 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 13:26:21.006213  898831 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 13:26:21.006275  898831 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 13:26:21.006376  898831 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 13:26:21.008757  898831 out.go:204]   - Booting up control plane ...
	I0520 13:26:21.008862  898831 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 13:26:21.008948  898831 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 13:26:21.009034  898831 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 13:26:21.009164  898831 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 13:26:21.009396  898831 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 13:26:21.009481  898831 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 13:26:21.009546  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:26:21.009723  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:26:21.009836  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:26:21.010095  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:26:21.010193  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:26:21.010397  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:26:21.010467  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:26:21.010623  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:26:21.010683  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:26:21.010878  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:26:21.010892  898831 kubeadm.go:309] 
	I0520 13:26:21.010926  898831 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 13:26:21.010965  898831 kubeadm.go:309] 		timed out waiting for the condition
	I0520 13:26:21.010974  898831 kubeadm.go:309] 
	I0520 13:26:21.011003  898831 kubeadm.go:309] 	This error is likely caused by:
	I0520 13:26:21.011034  898831 kubeadm.go:309] 		- The kubelet is not running
	I0520 13:26:21.011169  898831 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 13:26:21.011188  898831 kubeadm.go:309] 
	I0520 13:26:21.011338  898831 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 13:26:21.011387  898831 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 13:26:21.011436  898831 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 13:26:21.011446  898831 kubeadm.go:309] 
	I0520 13:26:21.011580  898831 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 13:26:21.011695  898831 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 13:26:21.011710  898831 kubeadm.go:309] 
	I0520 13:26:21.011864  898831 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 13:26:21.011988  898831 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 13:26:21.012095  898831 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 13:26:21.012204  898831 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 13:26:21.012239  898831 kubeadm.go:309] 
	W0520 13:26:21.012359  898831 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-785943 localhost] and IPs [192.168.50.63 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-785943 localhost] and IPs [192.168.50.63 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0520 13:26:21.012417  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 13:26:21.900444  898831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:26:21.914857  898831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 13:26:21.924492  898831 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 13:26:21.924516  898831 kubeadm.go:156] found existing configuration files:
	
	I0520 13:26:21.924570  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 13:26:21.933388  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 13:26:21.933441  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 13:26:21.942488  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 13:26:21.951060  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 13:26:21.951106  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 13:26:21.959976  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 13:26:21.968618  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 13:26:21.968675  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 13:26:21.977565  898831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 13:26:21.986097  898831 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 13:26:21.986152  898831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 13:26:21.994944  898831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 13:26:22.062978  898831 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 13:26:22.063058  898831 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 13:26:22.214009  898831 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 13:26:22.214152  898831 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 13:26:22.214310  898831 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 13:26:22.434609  898831 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 13:26:22.436634  898831 out.go:204]   - Generating certificates and keys ...
	I0520 13:26:22.436737  898831 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 13:26:22.436867  898831 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 13:26:22.437008  898831 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 13:26:22.437132  898831 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 13:26:22.437218  898831 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 13:26:22.437266  898831 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 13:26:22.437832  898831 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 13:26:22.438519  898831 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 13:26:22.438942  898831 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 13:26:22.439437  898831 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 13:26:22.439581  898831 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 13:26:22.439657  898831 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 13:26:22.709865  898831 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 13:26:23.120564  898831 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 13:26:23.243986  898831 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 13:26:23.315947  898831 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 13:26:23.331665  898831 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 13:26:23.334616  898831 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 13:26:23.334863  898831 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 13:26:23.482015  898831 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 13:26:23.483870  898831 out.go:204]   - Booting up control plane ...
	I0520 13:26:23.484003  898831 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 13:26:23.485782  898831 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 13:26:23.495573  898831 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 13:26:23.496604  898831 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 13:26:23.499385  898831 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 13:27:03.503071  898831 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 13:27:03.503186  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:27:03.503479  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:27:08.503788  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:27:08.504078  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:27:18.504840  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:27:18.505135  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:27:38.504249  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:27:38.504441  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:28:18.504542  898831 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 13:28:18.504873  898831 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 13:28:18.504902  898831 kubeadm.go:309] 
	I0520 13:28:18.504961  898831 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 13:28:18.505015  898831 kubeadm.go:309] 		timed out waiting for the condition
	I0520 13:28:18.505024  898831 kubeadm.go:309] 
	I0520 13:28:18.505076  898831 kubeadm.go:309] 	This error is likely caused by:
	I0520 13:28:18.505118  898831 kubeadm.go:309] 		- The kubelet is not running
	I0520 13:28:18.505256  898831 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 13:28:18.505266  898831 kubeadm.go:309] 
	I0520 13:28:18.505432  898831 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 13:28:18.505498  898831 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 13:28:18.505559  898831 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 13:28:18.505570  898831 kubeadm.go:309] 
	I0520 13:28:18.505701  898831 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 13:28:18.505831  898831 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 13:28:18.505844  898831 kubeadm.go:309] 
	I0520 13:28:18.505995  898831 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 13:28:18.506128  898831 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 13:28:18.506269  898831 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 13:28:18.506384  898831 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 13:28:18.506403  898831 kubeadm.go:309] 
	I0520 13:28:18.506924  898831 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 13:28:18.507048  898831 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 13:28:18.507237  898831 kubeadm.go:393] duration metric: took 3m56.034188049s to StartCluster
	I0520 13:28:18.507302  898831 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 13:28:18.507370  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 13:28:18.507432  898831 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 13:28:18.552354  898831 cri.go:89] found id: ""
	I0520 13:28:18.552395  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.552408  898831 logs.go:278] No container was found matching "kube-apiserver"
	I0520 13:28:18.552416  898831 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 13:28:18.552528  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 13:28:18.588902  898831 cri.go:89] found id: ""
	I0520 13:28:18.588952  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.588961  898831 logs.go:278] No container was found matching "etcd"
	I0520 13:28:18.588967  898831 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 13:28:18.589035  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 13:28:18.633426  898831 cri.go:89] found id: ""
	I0520 13:28:18.633465  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.633477  898831 logs.go:278] No container was found matching "coredns"
	I0520 13:28:18.633485  898831 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 13:28:18.633557  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 13:28:18.687056  898831 cri.go:89] found id: ""
	I0520 13:28:18.687082  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.687091  898831 logs.go:278] No container was found matching "kube-scheduler"
	I0520 13:28:18.687097  898831 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 13:28:18.687156  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 13:28:18.730043  898831 cri.go:89] found id: ""
	I0520 13:28:18.730076  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.730088  898831 logs.go:278] No container was found matching "kube-proxy"
	I0520 13:28:18.730095  898831 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 13:28:18.730161  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 13:28:18.768122  898831 cri.go:89] found id: ""
	I0520 13:28:18.768159  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.768168  898831 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 13:28:18.768175  898831 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 13:28:18.768243  898831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 13:28:18.809344  898831 cri.go:89] found id: ""
	I0520 13:28:18.809377  898831 logs.go:276] 0 containers: []
	W0520 13:28:18.809390  898831 logs.go:278] No container was found matching "kindnet"
	I0520 13:28:18.809405  898831 logs.go:123] Gathering logs for describe nodes ...
	I0520 13:28:18.809424  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 13:28:18.954248  898831 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 13:28:18.954283  898831 logs.go:123] Gathering logs for CRI-O ...
	I0520 13:28:18.954302  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 13:28:19.059672  898831 logs.go:123] Gathering logs for container status ...
	I0520 13:28:19.059713  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 13:28:19.105205  898831 logs.go:123] Gathering logs for kubelet ...
	I0520 13:28:19.105239  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 13:28:19.161008  898831 logs.go:123] Gathering logs for dmesg ...
	I0520 13:28:19.161051  898831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0520 13:28:19.178669  898831 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 13:28:19.178718  898831 out.go:239] * 
	* 
	W0520 13:28:19.178794  898831 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 13:28:19.178815  898831 out.go:239] * 
	* 
	W0520 13:28:19.179987  898831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 13:28:19.183694  898831 out.go:177] 
	W0520 13:28:19.185582  898831 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 13:28:19.185637  898831 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 13:28:19.185672  898831 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 13:28:19.187101  898831 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
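
The failure above is the K8S_KUBELET_NOT_RUNNING exit (status 109); the log's own advice is to inspect the kubelet unit and retry with the systemd cgroup driver. A minimal reproduction sketch along those lines, outside the test harness (it reuses the profile name and flags from this run plus the `--extra-config=kubelet.cgroup-driver=systemd` suggestion quoted in the log; the exact command sequence is illustrative, not part of the test):

	# Inspect the kubelet on the minikube node, as the kubeadm output above suggests.
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-785943 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-785943 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Recreate the profile and retry the oldest-version start with the cgroup-driver
	# override named in the "* Suggestion:" line of the log.
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-785943
	out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
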
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-785943
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-785943: (1.425795091s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-785943 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-785943 status --format={{.Host}}: exit status 7 (69.033102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
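
For reference, a sketch of the status probe the test tolerates here: after `minikube stop`, `status --format={{.Host}}` prints "Stopped" and exits non-zero (7 in this run), so a script has to swallow the exit code rather than treat it as a failure. The profile name is taken from this run; the snippet is illustrative only:

	# Non-zero exit is expected for a stopped host; capture the state anyway.
	state="$(out/minikube-linux-amd64 -p kubernetes-upgrade-785943 status --format='{{.Host}}' || true)"
	echo "host state: ${state}"   # prints "Stopped" after a successful "minikube stop"
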
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.535813262s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-785943 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (79.806135ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-785943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-785943
	    minikube start -p kubernetes-upgrade-785943 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7859432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-785943 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
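
As expected, the downgrade attempt is rejected with K8S_DOWNGRADE_UNSUPPORTED and the cluster stays on v1.30.1. A quick, hypothetical follow-up check (not part of the test) that the server version was left untouched, using the kubectl context this profile created and the same `version` call the test runs above:

	# Prints both the client and server gitVersion lines; the server one should
	# still report v1.30.1 after the rejected downgrade.
	kubectl --context kubernetes-upgrade-785943 version --output=json | grep gitVersion
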
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-785943 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.565389764s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-20 13:30:19.980596936 +0000 UTC m=+5892.135159382
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-785943 -n kubernetes-upgrade-785943
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-785943 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-785943 logs -n 25: (1.893021844s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-301514 sudo crio            | cilium-301514             | jenkins | v1.33.1 | 20 May 24 13:26 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-301514                      | cilium-301514             | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:26 UTC |
	| start   | -p stopped-upgrade-456265             | minikube                  | jenkins | v1.26.0 | 20 May 24 13:26 UTC | 20 May 24 13:27 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-782572 sudo           | NoKubernetes-782572       | jenkins | v1.33.1 | 20 May 24 13:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-782572                | NoKubernetes-782572       | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:26 UTC |
	| start   | -p cert-expiration-866786             | cert-expiration-866786    | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:27 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-823294             | running-upgrade-823294    | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:27 UTC |
	| start   | -p force-systemd-flag-783351          | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-456265 stop           | minikube                  | jenkins | v1.26.0 | 20 May 24 13:27 UTC | 20 May 24 13:27 UTC |
	| start   | -p stopped-upgrade-456265             | stopped-upgrade-456265    | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-783351 ssh cat     | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-783351          | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p cert-options-043975                | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:29 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-456265             | stopped-upgrade-456265    | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p pause-587544 --memory=2048         | pause-587544              | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:29 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-043975 ssh               | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-043975 -- sudo        | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-043975                | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p auto-301514 --memory=3072          | auto-301514               | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:30 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:29 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:29 UTC | 20 May 24 13:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-587544                       | pause-587544              | jenkins | v1.33.1 | 20 May 24 13:29 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-301514 pgrep -a               | auto-301514               | jenkins | v1.33.1 | 20 May 24 13:30 UTC | 20 May 24 13:30 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:29:51
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:29:51.967602  906496 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:29:51.967740  906496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:29:51.967751  906496 out.go:304] Setting ErrFile to fd 2...
	I0520 13:29:51.967758  906496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:29:51.968054  906496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:29:51.968810  906496 out.go:298] Setting JSON to false
	I0520 13:29:51.970339  906496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11540,"bootTime":1716200252,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:29:51.970424  906496 start.go:139] virtualization: kvm guest
	I0520 13:29:52.034631  906496 out.go:177] * [pause-587544] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:29:52.203527  906496 notify.go:220] Checking for updates...
	I0520 13:29:52.296412  906496 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:29:52.431928  906496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:29:52.690297  906496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:29:52.850798  906496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:29:53.004996  906496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:29:53.159571  906496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:29:53.264599  906496 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:53.265207  906496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:53.265274  906496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:53.281584  906496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0520 13:29:53.282146  906496 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:53.282795  906496 main.go:141] libmachine: Using API Version  1
	I0520 13:29:53.282823  906496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:53.283185  906496 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:53.283422  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:53.283709  906496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:29:53.284111  906496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:53.284159  906496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:53.298952  906496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0520 13:29:53.299611  906496 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:53.300321  906496 main.go:141] libmachine: Using API Version  1
	I0520 13:29:53.300356  906496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:53.300850  906496 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:53.301128  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:53.460265  906496 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:29:53.610163  906496 start.go:297] selected driver: kvm2
	I0520 13:29:53.610216  906496 start.go:901] validating driver "kvm2" against &{Name:pause-587544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-587544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:53.610430  906496 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:29:53.610967  906496 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:29:53.611087  906496 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:29:53.631881  906496 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:29:53.632849  906496 cni.go:84] Creating CNI manager for ""
	I0520 13:29:53.632868  906496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:29:53.632945  906496 start.go:340] cluster config:
	{Name:pause-587544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-587544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:53.633146  906496 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:29:53.835172  906496 out.go:177] * Starting "pause-587544" primary control-plane node in "pause-587544" cluster
	I0520 13:29:53.912892  906496 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:29:53.913000  906496 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:29:53.913034  906496 cache.go:56] Caching tarball of preloaded images
	I0520 13:29:53.913153  906496 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:29:53.913176  906496 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:29:53.913297  906496 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/config.json ...
	I0520 13:29:53.962414  906496 start.go:360] acquireMachinesLock for pause-587544: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:29:53.962543  906496 start.go:364] duration metric: took 57.938µs to acquireMachinesLock for "pause-587544"
	I0520 13:29:53.962567  906496 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:29:53.962581  906496 fix.go:54] fixHost starting: 
	I0520 13:29:53.963044  906496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:53.963095  906496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:53.982907  906496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0520 13:29:53.983349  906496 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:53.983918  906496 main.go:141] libmachine: Using API Version  1
	I0520 13:29:53.983941  906496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:53.984341  906496 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:53.984577  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:53.984780  906496 main.go:141] libmachine: (pause-587544) Calling .GetState
	I0520 13:29:53.986673  906496 fix.go:112] recreateIfNeeded on pause-587544: state=Running err=<nil>
	W0520 13:29:53.986700  906496 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:29:54.119490  906496 out.go:177] * Updating the running kvm2 "pause-587544" VM ...
	I0520 13:29:52.310674  906094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:29:52.310704  906094 machine.go:97] duration metric: took 6.963338864s to provisionDockerMachine
	I0520 13:29:52.310720  906094 start.go:293] postStartSetup for "kubernetes-upgrade-785943" (driver="kvm2")
	I0520 13:29:52.310734  906094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:29:52.310771  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:29:52.311155  906094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:29:52.311194  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:29:52.314437  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.314882  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:29:52.314916  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.315091  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:29:52.315307  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:29:52.315528  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:29:52.315786  906094 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	I0520 13:29:52.407384  906094 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:29:52.412530  906094 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:29:52.412559  906094 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 13:29:52.412629  906094 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 13:29:52.412727  906094 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 13:29:52.412863  906094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:29:52.423056  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:29:52.454454  906094 start.go:296] duration metric: took 143.718311ms for postStartSetup
	I0520 13:29:52.454501  906094 fix.go:56] duration metric: took 7.134652368s for fixHost
	I0520 13:29:52.454535  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:29:52.457728  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.458150  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:29:52.458184  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.458320  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:29:52.458564  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:29:52.458755  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:29:52.458984  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:29:52.459212  906094 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:52.459465  906094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0520 13:29:52.459486  906094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 13:29:52.576139  906094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211792.569221962
	
	I0520 13:29:52.576165  906094 fix.go:216] guest clock: 1716211792.569221962
	I0520 13:29:52.576174  906094 fix.go:229] Guest: 2024-05-20 13:29:52.569221962 +0000 UTC Remote: 2024-05-20 13:29:52.454506335 +0000 UTC m=+42.036033876 (delta=114.715627ms)
	I0520 13:29:52.576224  906094 fix.go:200] guest clock delta is within tolerance: 114.715627ms
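The delta above is simply guest clock minus the host-side timestamp: 13:29:52.569221962 − 13:29:52.454506335 ≈ 0.114715627 s, i.e. the 114.715627ms reported, which the log accepts as within tolerance, so no clock correction appears to be needed.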
	I0520 13:29:52.576233  906094 start.go:83] releasing machines lock for "kubernetes-upgrade-785943", held for 7.256418542s
	I0520 13:29:52.576262  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:29:52.576550  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetIP
	I0520 13:29:52.579474  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.579838  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:29:52.579869  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.580008  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:29:52.580682  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:29:52.580901  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .DriverName
	I0520 13:29:52.581015  906094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:29:52.581073  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:29:52.581141  906094 ssh_runner.go:195] Run: cat /version.json
	I0520 13:29:52.581169  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHHostname
	I0520 13:29:52.583956  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.584089  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.584365  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:29:52.584393  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.584424  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:29:52.584439  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:52.584512  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:29:52.584744  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHPort
	I0520 13:29:52.584734  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:29:52.585003  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:29:52.585025  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHKeyPath
	I0520 13:29:52.585179  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetSSHUsername
	I0520 13:29:52.585177  906094 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	I0520 13:29:52.585308  906094 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kubernetes-upgrade-785943/id_rsa Username:docker}
	W0520 13:29:52.669425  906094 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:29:52.669514  906094 ssh_runner.go:195] Run: systemctl --version
	I0520 13:29:52.693990  906094 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:29:52.854084  906094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:29:52.863877  906094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:29:52.863946  906094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:29:52.875769  906094 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:29:52.875791  906094 start.go:494] detecting cgroup driver to use...
	I0520 13:29:52.875878  906094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:29:52.893962  906094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:29:52.909340  906094 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:29:52.909413  906094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:29:52.924198  906094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:29:52.939422  906094 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:29:53.141694  906094 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:29:53.286727  906094 docker.go:233] disabling docker service ...
	I0520 13:29:53.286792  906094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:29:53.305882  906094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:29:53.320113  906094 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:29:53.483352  906094 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:29:53.627068  906094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:29:53.645323  906094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:29:53.669905  906094 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:29:53.670005  906094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.681041  906094 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:29:53.681187  906094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.692670  906094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.703536  906094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.715301  906094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:29:53.726776  906094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.737907  906094 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.750475  906094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:29:53.762096  906094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:29:53.772224  906094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:29:53.782285  906094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:53.935476  906094 ssh_runner.go:195] Run: sudo systemctl restart crio
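Taken together, the sed edits above leave the cri-o drop-in roughly as follows; this is only a sketch reconstructed from the commands in this log, and the [crio.image]/[crio.runtime] section headers are assumed from cri-o's standard config layout rather than taken from the file itself:

	# /etc/crio/crio.conf.d/02-crio.conf (approximate state after the edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]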
	I0520 13:29:50.901373  905860 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.493102913s)
	I0520 13:29:50.901404  905860 crio.go:469] duration metric: took 2.49324086s to extract the tarball
	I0520 13:29:50.901412  905860 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 13:29:50.939319  905860 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:29:50.983165  905860 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:29:50.983200  905860 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:29:50.983212  905860 kubeadm.go:928] updating node { 192.168.72.8 8443 v1.30.1 crio true true} ...
	I0520 13:29:50.983376  905860 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-301514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:auto-301514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:29:50.983543  905860 ssh_runner.go:195] Run: crio config
	I0520 13:29:51.035123  905860 cni.go:84] Creating CNI manager for ""
	I0520 13:29:51.035147  905860 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:29:51.035168  905860 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:29:51.035197  905860 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-301514 NodeName:auto-301514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:29:51.035351  905860 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-301514"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:29:51.035411  905860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:29:51.045857  905860 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:29:51.045949  905860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:29:51.055629  905860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 13:29:51.072490  905860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:29:51.088984  905860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2149 bytes)
	I0520 13:29:51.108085  905860 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I0520 13:29:51.111882  905860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 13:29:51.125305  905860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:51.248384  905860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:29:51.266295  905860 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514 for IP: 192.168.72.8
	I0520 13:29:51.266323  905860 certs.go:194] generating shared ca certs ...
	I0520 13:29:51.266346  905860 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.266541  905860 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:29:51.266602  905860 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:29:51.266615  905860 certs.go:256] generating profile certs ...
	I0520 13:29:51.266692  905860 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.key
	I0520 13:29:51.266711  905860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.crt with IP's: []
	I0520 13:29:51.585571  905860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.crt ...
	I0520 13:29:51.585601  905860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.crt: {Name:mkb0e1c1f883c840987cb894f829db566504588c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.585779  905860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.key ...
	I0520 13:29:51.585794  905860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.key: {Name:mk7cd46a674c7851ef993cad344e69b474aacebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.585902  905860 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.key.82064bd6
	I0520 13:29:51.585920  905860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.crt.82064bd6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.8]
	I0520 13:29:51.666927  905860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.crt.82064bd6 ...
	I0520 13:29:51.666965  905860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.crt.82064bd6: {Name:mk05003a2ecabb90d08c5903a396f84f21c99881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.667174  905860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.key.82064bd6 ...
	I0520 13:29:51.667194  905860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.key.82064bd6: {Name:mkbe3daa105649d82859198d6a2e173739b949a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.667294  905860 certs.go:381] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.crt.82064bd6 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.crt
	I0520 13:29:51.667398  905860 certs.go:385] copying /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.key.82064bd6 -> /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.key
	I0520 13:29:51.667461  905860 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.key
	I0520 13:29:51.667476  905860 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.crt with IP's: []
	I0520 13:29:51.882037  905860 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.crt ...
	I0520 13:29:51.882081  905860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.crt: {Name:mkc9288f398ae48f6e1d90dcd6d05dcdbcea47e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.882288  905860 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.key ...
	I0520 13:29:51.882304  905860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.key: {Name:mkcaaac5a952a9970f55e41b7bf68f2f54be625e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:51.882545  905860 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:29:51.882598  905860 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:29:51.882613  905860 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:29:51.882647  905860 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:29:51.882682  905860 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:29:51.882711  905860 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:29:51.882771  905860 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:29:51.883742  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:29:51.918244  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:29:51.954950  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:29:51.997735  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:29:52.031382  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0520 13:29:52.058488  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 13:29:52.177211  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:29:52.208953  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:29:52.237202  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:29:52.264287  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:29:52.292071  905860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:29:52.324998  905860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:29:52.345205  905860 ssh_runner.go:195] Run: openssl version
	I0520 13:29:52.352092  905860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:29:52.367107  905860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:52.371867  905860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:52.371926  905860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:52.377753  905860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:29:52.388914  905860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:29:52.401104  905860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:29:52.405776  905860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:29:52.405831  905860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:29:52.412587  905860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 13:29:52.424793  905860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:29:52.438681  905860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:29:52.443883  905860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:29:52.443969  905860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:29:52.450572  905860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
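The symlink targets used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how TLS libraries locate trusted certificates in a hashed CA directory; the check-and-link pattern in this log boils down to roughly the following sketch (paths copied from the log, hash value shown only as an example):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"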
	I0520 13:29:52.463997  905860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:29:52.468511  905860 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 13:29:52.468564  905860 kubeadm.go:391] StartCluster: {Name:auto-301514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-301514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:52.468658  905860 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:29:52.468734  905860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:29:52.510600  905860 cri.go:89] found id: ""
	I0520 13:29:52.510689  905860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 13:29:52.521800  905860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 13:29:52.532972  905860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 13:29:52.543883  905860 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 13:29:52.543906  905860 kubeadm.go:156] found existing configuration files:
	
	I0520 13:29:52.543952  905860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 13:29:52.553695  905860 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 13:29:52.553747  905860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 13:29:52.564691  905860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 13:29:52.575403  905860 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 13:29:52.575471  905860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 13:29:52.587703  905860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 13:29:52.597909  905860 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 13:29:52.598031  905860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 13:29:52.609542  905860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 13:29:52.622264  905860 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 13:29:52.622322  905860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 13:29:52.631743  905860 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 13:29:52.872670  905860 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 13:29:54.150213  906496 machine.go:94] provisionDockerMachine start ...
	I0520 13:29:54.150288  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:54.150676  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.154407  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.154829  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.154872  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.155088  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.155243  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.155448  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.155600  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.155796  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.156031  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.156051  906496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:29:54.268832  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-587544
	
	I0520 13:29:54.268867  906496 main.go:141] libmachine: (pause-587544) Calling .GetMachineName
	I0520 13:29:54.269106  906496 buildroot.go:166] provisioning hostname "pause-587544"
	I0520 13:29:54.269125  906496 main.go:141] libmachine: (pause-587544) Calling .GetMachineName
	I0520 13:29:54.269344  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.272130  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.272497  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.272528  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.272637  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.272836  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.273011  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.273175  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.273330  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.273513  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.273530  906496 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-587544 && echo "pause-587544" | sudo tee /etc/hostname
	I0520 13:29:54.401420  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-587544
	
	I0520 13:29:54.401453  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.404565  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.404917  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.404949  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.405139  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.405343  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.405496  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.405638  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.405814  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.406015  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.406040  906496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-587544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-587544/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-587544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:29:54.513394  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:29:54.513432  906496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 13:29:54.513455  906496 buildroot.go:174] setting up certificates
	I0520 13:29:54.513466  906496 provision.go:84] configureAuth start
	I0520 13:29:54.513480  906496 main.go:141] libmachine: (pause-587544) Calling .GetMachineName
	I0520 13:29:54.513807  906496 main.go:141] libmachine: (pause-587544) Calling .GetIP
	I0520 13:29:54.516623  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.517058  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.517085  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.517239  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.519760  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.520189  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.520224  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.520296  906496 provision.go:143] copyHostCerts
	I0520 13:29:54.520356  906496 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 13:29:54.520369  906496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:29:54.520432  906496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 13:29:54.520556  906496 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 13:29:54.520569  906496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:29:54.520596  906496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 13:29:54.520671  906496 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 13:29:54.520682  906496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:29:54.520707  906496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 13:29:54.520764  906496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.pause-587544 san=[127.0.0.1 192.168.61.6 localhost minikube pause-587544]
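minikube generates this server certificate in Go, but an equivalent certificate with the same org and SAN list as logged above could be produced by hand along these lines (file names here are placeholders, not paths used by minikube):

	# create a key and CSR, then sign it with the cluster CA, adding the SANs from the log
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.pause-587544"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.6,DNS:localhost,DNS:minikube,DNS:pause-587544")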
	I0520 13:29:54.669459  906496 provision.go:177] copyRemoteCerts
	I0520 13:29:54.669523  906496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:29:54.669550  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.672464  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.672871  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.672902  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.673116  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.673305  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.673460  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.673574  906496 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/pause-587544/id_rsa Username:docker}
	I0520 13:29:54.758523  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0520 13:29:54.783719  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 13:29:54.809107  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 13:29:54.834989  906496 provision.go:87] duration metric: took 321.489994ms to configureAuth
	I0520 13:29:54.835028  906496 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:29:54.835349  906496 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:54.835500  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.838632  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.839084  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.839113  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.839311  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.839533  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.839818  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.839981  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.840205  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.840421  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.840440  906496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:29:58.345155  906094 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.409615056s)
	I0520 13:29:58.345204  906094 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:29:58.345266  906094 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:29:58.352812  906094 start.go:562] Will wait 60s for crictl version
	I0520 13:29:58.352892  906094 ssh_runner.go:195] Run: which crictl
	I0520 13:29:58.357135  906094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:29:58.398008  906094 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:29:58.398136  906094 ssh_runner.go:195] Run: crio --version
	I0520 13:29:58.433520  906094 ssh_runner.go:195] Run: crio --version
	I0520 13:29:58.482827  906094 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:29:58.484062  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) Calling .GetIP
	I0520 13:29:58.487364  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:58.487812  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f4:b5", ip: ""} in network mk-kubernetes-upgrade-785943: {Iface:virbr2 ExpiryTime:2024-05-20 14:24:06 +0000 UTC Type:0 Mac:52:54:00:62:f4:b5 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:kubernetes-upgrade-785943 Clientid:01:52:54:00:62:f4:b5}
	I0520 13:29:58.487842  906094 main.go:141] libmachine: (kubernetes-upgrade-785943) DBG | domain kubernetes-upgrade-785943 has defined IP address 192.168.50.63 and MAC address 52:54:00:62:f4:b5 in network mk-kubernetes-upgrade-785943
	I0520 13:29:58.488245  906094 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 13:29:58.494055  906094 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-785943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:29:58.494221  906094 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:29:58.494288  906094 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:29:58.547252  906094 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:29:58.547281  906094 crio.go:433] Images already preloaded, skipping extraction
	I0520 13:29:58.547364  906094 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:29:58.585495  906094 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:29:58.585524  906094 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:29:58.585535  906094 kubeadm.go:928] updating node { 192.168.50.63 8443 v1.30.1 crio true true} ...
	I0520 13:29:58.585694  906094 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-785943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:29:58.585796  906094 ssh_runner.go:195] Run: crio config
	I0520 13:29:58.640212  906094 cni.go:84] Creating CNI manager for ""
	I0520 13:29:58.640231  906094 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:29:58.640250  906094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:29:58.640314  906094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-785943 NodeName:kubernetes-upgrade-785943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:29:58.640478  906094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-785943"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
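(Editor's note: one way to sanity-check a generated KubeletConfiguration like the one above is to unmarshal it and compare the runtime fields against the expected CRI-O socket and cgroup driver. A small, hypothetical check — it uses gopkg.in/yaml.v3 as an assumed dependency and is not part of the test suite:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig captures only the fields this check cares about.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := []byte(`
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	if cfg.ContainerRuntimeEndpoint != "unix:///var/run/crio/crio.sock" {
		fmt.Println("unexpected CRI endpoint:", cfg.ContainerRuntimeEndpoint)
	}
	fmt.Printf("%+v\n", cfg)
}

End of editor's note.)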
	
	I0520 13:29:58.640563  906094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:29:58.651254  906094 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:29:58.651336  906094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:29:58.661208  906094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0520 13:29:58.679955  906094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:29:58.697967  906094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0520 13:29:58.715892  906094 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0520 13:29:58.720313  906094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:29:58.863515  906094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:29:58.879257  906094 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943 for IP: 192.168.50.63
	I0520 13:29:58.879287  906094 certs.go:194] generating shared ca certs ...
	I0520 13:29:58.879309  906094 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:29:58.879506  906094 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:29:58.879566  906094 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:29:58.879581  906094 certs.go:256] generating profile certs ...
	I0520 13:29:58.879699  906094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/client.key
	I0520 13:29:58.879769  906094 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key.1d6667eb
	I0520 13:29:58.879821  906094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.key
	I0520 13:29:58.879986  906094 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:29:58.880044  906094 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:29:58.880059  906094 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:29:58.880102  906094 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:29:58.880137  906094 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:29:58.880170  906094 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:29:58.880225  906094 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:29:58.881253  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:29:58.906976  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:29:58.931626  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:29:58.956463  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:29:58.982578  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 13:29:59.011269  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:29:59.039888  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:29:59.066840  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kubernetes-upgrade-785943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 13:29:59.095332  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:29:59.125658  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:29:59.154045  906094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:29:59.181243  906094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:29:59.201855  906094 ssh_runner.go:195] Run: openssl version
	I0520 13:29:59.208378  906094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:29:59.219715  906094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:29:59.224305  906094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:29:59.224418  906094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:29:59.230590  906094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:29:59.242995  906094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:29:59.255471  906094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:59.260273  906094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:59.260335  906094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:29:59.266405  906094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:29:59.279490  906094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:29:59.294593  906094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:29:59.300267  906094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:29:59.300335  906094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:29:59.306041  906094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
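(Editor's note: the test/ln pairs above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, where the hash is taken from `openssl x509 -hash -noout`. A stripped-down, hypothetical version of that step in Go — it shells out to openssl the same way the log does; the paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based
// tools expect, pointing back at the installed PEM file.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of ln -fs: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

End of editor's note.)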
	I0520 13:29:59.316260  906094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:29:59.321308  906094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:29:59.327113  906094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:29:59.333004  906094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:29:59.338927  906094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:29:59.344586  906094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:29:59.350484  906094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
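(Editor's note: the `-checkend 86400` runs above ask openssl whether each certificate will still be valid 24 hours from now. The same check can be expressed directly with crypto/x509; this is only an illustration — the test itself shells out to openssl exactly as logged:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid after
// the given duration (the equivalent of openssl's -checkend flag).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}

End of editor's note.)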
	I0520 13:29:59.364890  906094 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-785943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-785943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:59.365025  906094 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:29:59.365088  906094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:29:59.597355  906094 cri.go:89] found id: "7f6f15a0de81416218e4bf68ed0d9af1ab1b35c091d78ce6c8e1435f2133339c"
	I0520 13:29:59.597389  906094 cri.go:89] found id: "e5c1a30091fb768ec070845322187c7422b69ec9e9ebc8d15069ff6e4766c403"
	I0520 13:29:59.597400  906094 cri.go:89] found id: "3197d84ffd13821b58666f58f0845db4efb8e3740f4413fea656d3cfa018c419"
	I0520 13:29:59.597430  906094 cri.go:89] found id: "934914d4fe7faf9c77fd531ae881cce7a88580f5a6a835001c8b9d83c150cce2"
	I0520 13:29:59.597435  906094 cri.go:89] found id: "856ed9c429d472374c45a6e3199eff8b3d21a4213a184a807243cf8689123d64"
	I0520 13:29:59.597439  906094 cri.go:89] found id: "59c12c69e33f8ba6268f4899e5c2d4c2a9351f40f3e9d50867fe09dbf95d2f0c"
	I0520 13:29:59.597443  906094 cri.go:89] found id: "bf5bfedd3d47555e0e06c9a057ad4703dd6058bfa34c1a2309519082556afcb6"
	I0520 13:29:59.597447  906094 cri.go:89] found id: "6de6a798a2eab8a43f58d6fbe27d1ce55506c8062639c3f76c27bc499ab2cccd"
	I0520 13:29:59.597451  906094 cri.go:89] found id: ""
	I0520 13:29:59.597509  906094 ssh_runner.go:195] Run: sudo runc list -f json
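(Editor's note: StartCluster begins by listing any existing kube-system containers through crictl, which is where the "found id:" lines above come from. A bare-bones, hypothetical equivalent of that listing step — the command mirrors the one logged above; everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the IDs of all containers (running or exited)
// labelled with the kube-system namespace, as reported by crictl.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println(len(ids), "containers found", err)
}

End of editor's note.)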
	
	
	==> CRI-O <==
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.806936744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cca3ac9e-9919-484f-8846-2a50f3cfbea8 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.808648187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8cefbc8-9d77-49b8-91d1-9b089bfed0b9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.809287657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211820809255251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8cefbc8-9d77-49b8-91d1-9b089bfed0b9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.809986908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e90f14e-9aaa-4ba6-8dd1-9b61f77b7f1a name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.810040079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e90f14e-9aaa-4ba6-8dd1-9b61f77b7f1a name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.810466950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a182ed5fdcccf86bcc863e66a8293ec759a3f3f5d2eaa476ab9150931674ab7f,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211817181453067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e1e4f94c3efed6d4b5d6a42706e55950d155a2efb921602ed2e63f32e8a9f2d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211813344629670,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc95d6da4ab5753f62dce158a57e829125c5eefcb3834981827b5d68c0ca612a,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211813338763794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defdbe06ace968d791cba3fa608905910a490590d152d0d3c3279b4716ae3590,PodSandboxId:ca4f908da58f51ed1eeac6dbb39f4211f8e7ad7907f8779b87c4ea6367300372,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801274614264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0af42df609c00a089a4a6701abe4cfca2d7ad08d6cc5841981e6525b19bec95,PodSandboxId:66b900a4498f23c250f3f187b0702127303341147da082c6a1af74e63adf2a8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801219045092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757e
a7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d619c81f40384b03da113ac26c57da4ca746215c6a1027ba9e05ac4436644c44,PodSandboxId:2705b4844efa913da1dfeda7e4f7e85d7dcf3adee8cb7342d6de3897bd7682bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211799883541591,Labels:map[s
tring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b28f21ced0af52d956b474fec537e3723a7474e5f7a099bea3a06a699221cf1,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716211799962816468,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f574099752b9926352d5bfe3dda9ded33e5ea591ffdaf0355fbcd37178ef256c,PodSandboxId:634529428a1859d21dd1a71ada31bc5fb996e8156b5a13b9765ad1b6d234fbd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211800204173777,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85db1a83907f4360e4ab17e2b7a1e0eb8945b4f5fd97d2c16b08e1e1882ce77,PodSandboxId:77be6f71833563666755e01f87fdd5e7f36c2c01606c1b1b9600567df4319af6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211800156288092,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f3ec1030687f02bbfaee8f0f815098c06594f9bc8338445762b9568a0d01d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211799882653226,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02356004940dd166f1a40abef64f938ece208ecbe0bbcc5bf4b4241ec58a67df,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211799784301688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c1a30091fb768ec070845322187c7422b69ec9e9ebc8d15069ff6e4766c403,PodSandboxId:83fa8bb5384799c37a1e03ea3dc5d318ccb2203ff4555584d1ba5f0738e46a74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211758297662753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name:
coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3197d84ffd13821b58666f58f0845db4efb8e3740f4413fea656d3cfa018c419,PodSandboxId:ccf5351b6ba9c77f22a43028851b32f2616c6e841b80cfd8f642f55869c444c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410
dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211758109001819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757ea7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934914d4fe7faf9c77fd531ae881cce7a88580f5a6a835001c8b9d83c150cce2,PodSandboxId:2f1f8cb3b757c078b4e8c9392d76fa1b7c0b9dab20350ea4bf23cf50fc8c03da,Metadata:&ContainerMetadata{Name:kube-
proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211757967319614,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ed9c429d472374c45a6e3199eff8b3d21a4213a184a807243cf8689123d64,PodSandboxId:ccf9f902d8da4f64109dc0ae9b23e0937ee8440d2b7600d01dab94d76ad460fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211738781540983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5bfedd3d47555e0e06c9a057ad4703dd6058bfa34c1a2309519082556afcb6,PodSandboxId:273c9b470b77158d7649eea2c40655e5f0fd309d19a1493d81956fadd99ddbaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Im
ageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211738719675341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e90f14e-9aaa-4ba6-8dd1-9b61f77b7f1a name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.862854319Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c2dcea1-3541-4169-9133-aa4d0d96afe7 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.862968266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c2dcea1-3541-4169-9133-aa4d0d96afe7 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.864870569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2ed2fb1-e5ae-4f28-b76b-89cca1696a4b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.865619447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211820865352929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2ed2fb1-e5ae-4f28-b76b-89cca1696a4b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.866385584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4c18ea2-ee90-4166-9147-dad8a9d84e7d name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.866443713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4c18ea2-ee90-4166-9147-dad8a9d84e7d name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.866959470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a182ed5fdcccf86bcc863e66a8293ec759a3f3f5d2eaa476ab9150931674ab7f,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211817181453067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e1e4f94c3efed6d4b5d6a42706e55950d155a2efb921602ed2e63f32e8a9f2d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211813344629670,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc95d6da4ab5753f62dce158a57e829125c5eefcb3834981827b5d68c0ca612a,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211813338763794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defdbe06ace968d791cba3fa608905910a490590d152d0d3c3279b4716ae3590,PodSandboxId:ca4f908da58f51ed1eeac6dbb39f4211f8e7ad7907f8779b87c4ea6367300372,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801274614264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0af42df609c00a089a4a6701abe4cfca2d7ad08d6cc5841981e6525b19bec95,PodSandboxId:66b900a4498f23c250f3f187b0702127303341147da082c6a1af74e63adf2a8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801219045092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757e
a7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d619c81f40384b03da113ac26c57da4ca746215c6a1027ba9e05ac4436644c44,PodSandboxId:2705b4844efa913da1dfeda7e4f7e85d7dcf3adee8cb7342d6de3897bd7682bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211799883541591,Labels:map[s
tring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b28f21ced0af52d956b474fec537e3723a7474e5f7a099bea3a06a699221cf1,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716211799962816468,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f574099752b9926352d5bfe3dda9ded33e5ea591ffdaf0355fbcd37178ef256c,PodSandboxId:634529428a1859d21dd1a71ada31bc5fb996e8156b5a13b9765ad1b6d234fbd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211800204173777,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85db1a83907f4360e4ab17e2b7a1e0eb8945b4f5fd97d2c16b08e1e1882ce77,PodSandboxId:77be6f71833563666755e01f87fdd5e7f36c2c01606c1b1b9600567df4319af6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211800156288092,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f3ec1030687f02bbfaee8f0f815098c06594f9bc8338445762b9568a0d01d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211799882653226,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02356004940dd166f1a40abef64f938ece208ecbe0bbcc5bf4b4241ec58a67df,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211799784301688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c1a30091fb768ec070845322187c7422b69ec9e9ebc8d15069ff6e4766c403,PodSandboxId:83fa8bb5384799c37a1e03ea3dc5d318ccb2203ff4555584d1ba5f0738e46a74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211758297662753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name:
coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3197d84ffd13821b58666f58f0845db4efb8e3740f4413fea656d3cfa018c419,PodSandboxId:ccf5351b6ba9c77f22a43028851b32f2616c6e841b80cfd8f642f55869c444c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410
dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211758109001819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757ea7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934914d4fe7faf9c77fd531ae881cce7a88580f5a6a835001c8b9d83c150cce2,PodSandboxId:2f1f8cb3b757c078b4e8c9392d76fa1b7c0b9dab20350ea4bf23cf50fc8c03da,Metadata:&ContainerMetadata{Name:kube-
proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211757967319614,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ed9c429d472374c45a6e3199eff8b3d21a4213a184a807243cf8689123d64,PodSandboxId:ccf9f902d8da4f64109dc0ae9b23e0937ee8440d2b7600d01dab94d76ad460fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211738781540983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5bfedd3d47555e0e06c9a057ad4703dd6058bfa34c1a2309519082556afcb6,PodSandboxId:273c9b470b77158d7649eea2c40655e5f0fd309d19a1493d81956fadd99ddbaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Im
ageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211738719675341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4c18ea2-ee90-4166-9147-dad8a9d84e7d name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.913639664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f1d2fea-36dc-4b0f-adf7-d6d9cd1d69df name=/runtime.v1.RuntimeService/Version
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.913745152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f1d2fea-36dc-4b0f-adf7-d6d9cd1d69df name=/runtime.v1.RuntimeService/Version
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.917833061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc6021a3-7260-49a6-81f4-fe64a9c23868 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.919055600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211820918994842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc6021a3-7260-49a6-81f4-fe64a9c23868 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.920504979Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5d9302e-cbaf-48dc-bf61-580928b59e97 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.920801889Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ca4f908da58f51ed1eeac6dbb39f4211f8e7ad7907f8779b87c4ea6367300372,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5mrq5,Uid:e72b900a-c0a1-451d-b469-04b75f9010a4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799896803851,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:29:17.437360839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66b900a4498f23c250f3f187b0702127303341147da082c6a1af74e63adf2a8f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ssqvj,Uid:53b40a0d-02aa-48bc-a780-5613a2757ea7,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799761447755,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757ea7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:29:17.407303491Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-785943,Uid:f8e4026a40fd6d5fdf695f8e4a5bd0f7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799629571169,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,tier: control-plane,},Annotations:map[string]string{kub
eadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.63:2379,kubernetes.io/config.hash: f8e4026a40fd6d5fdf695f8e4a5bd0f7,kubernetes.io/config.seen: 2024-05-20T13:28:58.034424093Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:634529428a1859d21dd1a71ada31bc5fb996e8156b5a13b9765ad1b6d234fbd9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-785943,Uid:ffa324a0f157f091979189a0b05ed16e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799604975307,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ffa324a0f157f091979189a0b05ed16e,kubernetes.io/config.seen: 2024-05-20T13:28:58.012634198Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77be6f7183356366
6755e01f87fdd5e7f36c2c01606c1b1b9600567df4319af6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-785943,Uid:14c5833fa04724896a20990cfdd2567a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799554340937,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 14c5833fa04724896a20990cfdd2567a,kubernetes.io/config.seen: 2024-05-20T13:28:58.012633169Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e3430ead-92fe-47c3-b8ac-0b4fcb673845,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799453682038,Labels:map[string]stri
ng{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/con
fig.seen: 2024-05-20T13:29:16.460049469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-785943,Uid:73d382d74d04a571b3aff80808df5c97,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799418322832,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.63:8443,kubernetes.io/config.hash: 73d382d74d04a571b3aff80808df5c97,kubernetes.io/config.seen: 2024-05-20T13:28:58.012628438Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2705b4844efa913da1dfeda7e4f7e85d7dcf3adee8cb7342d6de3897bd7682bf,Metadata:&PodSandbo
xMetadata{Name:kube-proxy-l865c,Uid:1b52ab72-287d-496e-9a1a-24789bb86ba9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716211799410792183,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T13:29:16.910586902Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c5d9302e-cbaf-48dc-bf61-580928b59e97 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.921746831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=914c2c5c-2fc0-46b9-8a1a-190a4318d570 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.921819118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=914c2c5c-2fc0-46b9-8a1a-190a4318d570 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.922348733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a182ed5fdcccf86bcc863e66a8293ec759a3f3f5d2eaa476ab9150931674ab7f,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211817181453067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e1e4f94c3efed6d4b5d6a42706e55950d155a2efb921602ed2e63f32e8a9f2d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211813344629670,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc95d6da4ab5753f62dce158a57e829125c5eefcb3834981827b5d68c0ca612a,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211813338763794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defdbe06ace968d791cba3fa608905910a490590d152d0d3c3279b4716ae3590,PodSandboxId:ca4f908da58f51ed1eeac6dbb39f4211f8e7ad7907f8779b87c4ea6367300372,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801274614264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0af42df609c00a089a4a6701abe4cfca2d7ad08d6cc5841981e6525b19bec95,PodSandboxId:66b900a4498f23c250f3f187b0702127303341147da082c6a1af74e63adf2a8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801219045092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757e
a7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d619c81f40384b03da113ac26c57da4ca746215c6a1027ba9e05ac4436644c44,PodSandboxId:2705b4844efa913da1dfeda7e4f7e85d7dcf3adee8cb7342d6de3897bd7682bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211799883541591,Labels:map[s
tring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b28f21ced0af52d956b474fec537e3723a7474e5f7a099bea3a06a699221cf1,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716211799962816468,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f574099752b9926352d5bfe3dda9ded33e5ea591ffdaf0355fbcd37178ef256c,PodSandboxId:634529428a1859d21dd1a71ada31bc5fb996e8156b5a13b9765ad1b6d234fbd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211800204173777,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85db1a83907f4360e4ab17e2b7a1e0eb8945b4f5fd97d2c16b08e1e1882ce77,PodSandboxId:77be6f71833563666755e01f87fdd5e7f36c2c01606c1b1b9600567df4319af6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211800156288092,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f3ec1030687f02bbfaee8f0f815098c06594f9bc8338445762b9568a0d01d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211799882653226,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02356004940dd166f1a40abef64f938ece208ecbe0bbcc5bf4b4241ec58a67df,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211799784301688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c1a30091fb768ec070845322187c7422b69ec9e9ebc8d15069ff6e4766c403,PodSandboxId:83fa8bb5384799c37a1e03ea3dc5d318ccb2203ff4555584d1ba5f0738e46a74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211758297662753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name:
coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3197d84ffd13821b58666f58f0845db4efb8e3740f4413fea656d3cfa018c419,PodSandboxId:ccf5351b6ba9c77f22a43028851b32f2616c6e841b80cfd8f642f55869c444c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410
dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211758109001819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757ea7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934914d4fe7faf9c77fd531ae881cce7a88580f5a6a835001c8b9d83c150cce2,PodSandboxId:2f1f8cb3b757c078b4e8c9392d76fa1b7c0b9dab20350ea4bf23cf50fc8c03da,Metadata:&ContainerMetadata{Name:kube-
proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211757967319614,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ed9c429d472374c45a6e3199eff8b3d21a4213a184a807243cf8689123d64,PodSandboxId:ccf9f902d8da4f64109dc0ae9b23e0937ee8440d2b7600d01dab94d76ad460fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211738781540983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5bfedd3d47555e0e06c9a057ad4703dd6058bfa34c1a2309519082556afcb6,PodSandboxId:273c9b470b77158d7649eea2c40655e5f0fd309d19a1493d81956fadd99ddbaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Im
ageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211738719675341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=914c2c5c-2fc0-46b9-8a1a-190a4318d570 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.923147209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2bd49e6-14fd-4b5c-80f3-38decb705657 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.923284133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2bd49e6-14fd-4b5c-80f3-38decb705657 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:20 kubernetes-upgrade-785943 crio[2263]: time="2024-05-20 13:30:20.923795046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a182ed5fdcccf86bcc863e66a8293ec759a3f3f5d2eaa476ab9150931674ab7f,PodSandboxId:a457973b9f34bfd62476cc6c121040810d41af9a722f7c3fcc966cbe51fc3224,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716211817181453067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3430ead-92fe-47c3-b8ac-0b4fcb673845,},Annotations:map[string]string{io.kubernetes.container.hash: 49609427,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e1e4f94c3efed6d4b5d6a42706e55950d155a2efb921602ed2e63f32e8a9f2d,PodSandboxId:94b14f32a37326069f97a72ca23765f3f4c00136c693903714b360e7e2635887,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211813344629670,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e4026a40fd6d5fdf695f8e4a5bd0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 379da08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc95d6da4ab5753f62dce158a57e829125c5eefcb3834981827b5d68c0ca612a,PodSandboxId:554ce10e2f7766633f5a18063cfc52f72ddd717c4a74fc874ef598dc0a3173e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211813338763794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d382d74d04a571b3aff80808df5c97,},Annotations:map[string]string{io.kubernetes.container.hash: 147d16dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defdbe06ace968d791cba3fa608905910a490590d152d0d3c3279b4716ae3590,PodSandboxId:ca4f908da58f51ed1eeac6dbb39f4211f8e7ad7907f8779b87c4ea6367300372,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801274614264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5mrq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e72b900a-c0a1-451d-b469-04b75f9010a4,},Annotations:map[string]string{io.kubernetes.container.hash: 910e5eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0af42df609c00a089a4a6701abe4cfca2d7ad08d6cc5841981e6525b19bec95,PodSandboxId:66b900a4498f23c250f3f187b0702127303341147da082c6a1af74e63adf2a8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211801219045092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ssqvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b40a0d-02aa-48bc-a780-5613a2757e
a7,},Annotations:map[string]string{io.kubernetes.container.hash: a26182e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d619c81f40384b03da113ac26c57da4ca746215c6a1027ba9e05ac4436644c44,PodSandboxId:2705b4844efa913da1dfeda7e4f7e85d7dcf3adee8cb7342d6de3897bd7682bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211799883541591,Labels:map[s
tring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l865c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b52ab72-287d-496e-9a1a-24789bb86ba9,},Annotations:map[string]string{io.kubernetes.container.hash: 476206cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f574099752b9926352d5bfe3dda9ded33e5ea591ffdaf0355fbcd37178ef256c,PodSandboxId:634529428a1859d21dd1a71ada31bc5fb996e8156b5a13b9765ad1b6d234fbd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211800204173777,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa324a0f157f091979189a0b05ed16e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85db1a83907f4360e4ab17e2b7a1e0eb8945b4f5fd97d2c16b08e1e1882ce77,PodSandboxId:77be6f71833563666755e01f87fdd5e7f36c2c01606c1b1b9600567df4319af6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211800156288092,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-785943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5833fa04724896a20990cfdd2567a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2bd49e6-14fd-4b5c-80f3-38decb705657 name=/runtime.v1.RuntimeService/ListContainers
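	
	The Request/Response pairs in the crio debug log above are CRI RuntimeService and ImageService RPCs (Version, ImageFsInfo, ListPodSandbox, ListContainers) made over the CRI-O socket, typically by the kubelet or by crictl. As a purely illustrative sketch (not part of this test run; the socket path /var/run/crio/crio.sock and the use of the k8s.io/cri-api and grpc-go modules are assumptions, not taken from this report), the filtered ListContainers call seen above could be replayed in Go roughly like this:
	
	// Hypothetical sketch only: replays the CONTAINER_RUNNING-filtered ListContainers
	// request shown in the crio debug log against a local CRI-O socket.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O usually listens on a local unix socket; the exact path is an assumption here.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := runtimev1.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same filter as the second ListContainers request above: running containers only.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{
			Filter: &runtimev1.ContainerFilter{
				State: &runtimev1.ContainerStateValue{State: runtimev1.ContainerState_CONTAINER_RUNNING},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}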
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a182ed5fdcccf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   a457973b9f34b       storage-provisioner
	3e1e4f94c3efe       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago        Running             etcd                      2                   94b14f32a3732       etcd-kubernetes-upgrade-785943
	cc95d6da4ab57       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   7 seconds ago        Running             kube-apiserver            2                   554ce10e2f776       kube-apiserver-kubernetes-upgrade-785943
	defdbe06ace96       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   1                   ca4f908da58f5       coredns-7db6d8ff4d-5mrq5
	d0af42df609c0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   1                   66b900a4498f2       coredns-7db6d8ff4d-ssqvj
	f574099752b99       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   20 seconds ago       Running             kube-scheduler            1                   634529428a185       kube-scheduler-kubernetes-upgrade-785943
	c85db1a83907f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   20 seconds ago       Running             kube-controller-manager   1                   77be6f7183356       kube-controller-manager-kubernetes-upgrade-785943
	7b28f21ced0af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago       Exited              storage-provisioner       1                   a457973b9f34b       storage-provisioner
	d619c81f40384       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   21 seconds ago       Running             kube-proxy                1                   2705b4844efa9       kube-proxy-l865c
	be9f3ec103068       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Exited              etcd                      1                   94b14f32a3732       etcd-kubernetes-upgrade-785943
	02356004940dd       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   21 seconds ago       Exited              kube-apiserver            1                   554ce10e2f776       kube-apiserver-kubernetes-upgrade-785943
	e5c1a30091fb7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   83fa8bb538479       coredns-7db6d8ff4d-5mrq5
	3197d84ffd138       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   ccf5351b6ba9c       coredns-7db6d8ff4d-ssqvj
	934914d4fe7fa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   About a minute ago   Exited              kube-proxy                0                   2f1f8cb3b757c       kube-proxy-l865c
	856ed9c429d47       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   About a minute ago   Exited              kube-scheduler            0                   ccf9f902d8da4       kube-scheduler-kubernetes-upgrade-785943
	bf5bfedd3d475       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   About a minute ago   Exited              kube-controller-manager   0                   273c9b470b771       kube-controller-manager-kubernetes-upgrade-785943
	
	
	==> coredns [3197d84ffd13821b58666f58f0845db4efb8e3740f4413fea656d3cfa018c419] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1481197140]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:29:18.451) (total time: 27784ms):
	Trace[1481197140]: [27.784269089s] [27.784269089s] END
	[INFO] plugin/kubernetes: Trace[734968553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:29:18.451) (total time: 27783ms):
	Trace[734968553]: [27.783729217s] [27.783729217s] END
	[INFO] plugin/kubernetes: Trace[1127055208]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:29:18.452) (total time: 27768ms):
	Trace[1127055208]: [27.768948235s] [27.768948235s] END
	
	
	==> coredns [d0af42df609c00a089a4a6701abe4cfca2d7ad08d6cc5841981e6525b19bec95] <==
	[INFO] plugin/kubernetes: Trace[1611872136]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:30:01.872) (total time: 10142ms):
	Trace[1611872136]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49614->10.96.0.1:443: read: connection reset by peer 10141ms (13:30:12.014)
	Trace[1611872136]: [10.142206165s] [10.142206165s] END
	[INFO] plugin/kubernetes: Trace[1654045378]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:30:01.860) (total time: 10153ms):
	Trace[1654045378]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49600->10.96.0.1:443: read: connection reset by peer 10153ms (13:30:12.014)
	Trace[1654045378]: [10.153697125s] [10.153697125s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49614->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49600->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49624->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1427384216]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:30:01.872) (total time: 10142ms):
	Trace[1427384216]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49624->10.96.0.1:443: read: connection reset by peer 10142ms (13:30:12.014)
	Trace[1427384216]: [10.142078624s] [10.142078624s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49624->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found]
	
	
	==> coredns [defdbe06ace968d791cba3fa608905910a490590d152d0d3c3279b4716ae3590] <==
	Trace[1501319244]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40538->10.96.0.1:443: read: connection reset by peer 10072ms (13:30:12.013)
	Trace[1501319244]: [10.073803217s] [10.073803217s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40552->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[560379025]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:30:01.948) (total time: 10066ms):
	Trace[560379025]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40552->10.96.0.1:443: read: connection reset by peer 10066ms (13:30:12.014)
	Trace[560379025]: [10.066064802s] [10.066064802s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40552->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40566->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[116961075]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:30:01.949) (total time: 10065ms):
	Trace[116961075]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40566->10.96.0.1:443: read: connection reset by peer 10064ms (13:30:12.013)
	Trace[116961075]: [10.065741837s] [10.065741837s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40566->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	
	
	==> coredns [e5c1a30091fb768ec070845322187c7422b69ec9e9ebc8d15069ff6e4766c403] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[89123877]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:29:18.452) (total time: 27771ms):
	Trace[89123877]: [27.771485796s] [27.771485796s] END
	[INFO] plugin/kubernetes: Trace[955477894]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:29:18.452) (total time: 27772ms):
	Trace[955477894]: [27.772404574s] [27.772404574s] END
	[INFO] plugin/kubernetes: Trace[406536535]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 13:29:18.451) (total time: 27773ms):
	Trace[406536535]: [27.773301787s] [27.773301787s] END
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-785943
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-785943
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-785943
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:30:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:30:17 +0000   Mon, 20 May 2024 13:28:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:30:17 +0000   Mon, 20 May 2024 13:28:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:30:17 +0000   Mon, 20 May 2024 13:28:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:30:17 +0000   Mon, 20 May 2024 13:29:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.63
	  Hostname:    kubernetes-upgrade-785943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7adc6bf00aa7457589a7f033df524f32
	  System UUID:                7adc6bf0-0aa7-4575-89a7-f033df524f32
	  Boot ID:                    9b871fa7-7819-4067-86c9-40c9cc45f5e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5mrq5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 coredns-7db6d8ff4d-ssqvj                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-kubernetes-upgrade-785943                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	  kube-system                 kube-apiserver-kubernetes-upgrade-785943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-785943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-l865c                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-kubernetes-upgrade-785943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-785943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-785943 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node kubernetes-upgrade-785943 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                node-controller  Node kubernetes-upgrade-785943 event: Registered Node kubernetes-upgrade-785943 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)    kubelet          Node kubernetes-upgrade-785943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)    kubelet          Node kubernetes-upgrade-785943 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)    kubelet          Node kubernetes-upgrade-785943 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.273615] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.071202] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067844] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.219449] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129536] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.290720] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.590434] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +0.057360] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.882342] systemd-fstab-generator[859]: Ignoring "noauto" option for root device
	[May20 13:29] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.093853] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.138891] kauditd_printk_skb: 21 callbacks suppressed
	[ +35.257334] systemd-fstab-generator[2177]: Ignoring "noauto" option for root device
	[  +0.122090] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.068651] systemd-fstab-generator[2189]: Ignoring "noauto" option for root device
	[  +0.177477] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.157284] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.300732] systemd-fstab-generator[2243]: Ignoring "noauto" option for root device
	[  +4.935408] systemd-fstab-generator[2401]: Ignoring "noauto" option for root device
	[  +0.084454] kauditd_printk_skb: 100 callbacks suppressed
	[May20 13:30] systemd-fstab-generator[3327]: Ignoring "noauto" option for root device
	[  +0.102348] kauditd_printk_skb: 119 callbacks suppressed
	[  +6.209936] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.119455] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> etcd [3e1e4f94c3efed6d4b5d6a42706e55950d155a2efb921602ed2e63f32e8a9f2d] <==
	{"level":"info","ts":"2024-05-20T13:30:13.57513Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:13.57515Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:13.576327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 switched to configuration voters=(8799934109536324694)"}
	{"level":"info","ts":"2024-05-20T13:30:13.576431Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"77c04c1230f4f4e2","local-member-id":"7a1fa572d5c18c56","added-peer-id":"7a1fa572d5c18c56","added-peer-peer-urls":["https://192.168.50.63:2380"]}
	{"level":"info","ts":"2024-05-20T13:30:13.576582Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"77c04c1230f4f4e2","local-member-id":"7a1fa572d5c18c56","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:13.576641Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:13.582576Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:30:13.58265Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.63:2380"}
	{"level":"info","ts":"2024-05-20T13:30:13.582821Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.63:2380"}
	{"level":"info","ts":"2024-05-20T13:30:13.584375Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7a1fa572d5c18c56","initial-advertise-peer-urls":["https://192.168.50.63:2380"],"listen-peer-urls":["https://192.168.50.63:2380"],"advertise-client-urls":["https://192.168.50.63:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.63:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:30:13.584496Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:30:15.259269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:15.25942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:15.259487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 received MsgPreVoteResp from 7a1fa572d5c18c56 at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:15.259527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:15.259551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 received MsgVoteResp from 7a1fa572d5c18c56 at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:15.259578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:15.259606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a1fa572d5c18c56 elected leader 7a1fa572d5c18c56 at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:15.265752Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7a1fa572d5c18c56","local-member-attributes":"{Name:kubernetes-upgrade-785943 ClientURLs:[https://192.168.50.63:2379]}","request-path":"/0/members/7a1fa572d5c18c56/attributes","cluster-id":"77c04c1230f4f4e2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:30:15.265831Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:30:15.265995Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:30:15.266034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:30:15.266051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:30:15.268056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T13:30:15.268443Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.63:2379"}
	
	
	==> etcd [be9f3ec1030687f02bbfaee8f0f815098c06594f9bc8338445762b9568a0d01d] <==
	{"level":"info","ts":"2024-05-20T13:30:00.536256Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"102.288438ms"}
	{"level":"info","ts":"2024-05-20T13:30:00.610394Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-05-20T13:30:00.63155Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"77c04c1230f4f4e2","local-member-id":"7a1fa572d5c18c56","commit-index":429}
	{"level":"info","ts":"2024-05-20T13:30:00.631663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 switched to configuration voters=()"}
	{"level":"info","ts":"2024-05-20T13:30:00.631691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 became follower at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:00.631707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7a1fa572d5c18c56 [peers: [], term: 2, commit: 429, applied: 0, lastindex: 429, lastterm: 2]"}
	{"level":"warn","ts":"2024-05-20T13:30:00.633562Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-05-20T13:30:00.65476Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":413}
	{"level":"info","ts":"2024-05-20T13:30:00.670175Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-05-20T13:30:00.675859Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7a1fa572d5c18c56","timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:30:00.676168Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7a1fa572d5c18c56"}
	{"level":"info","ts":"2024-05-20T13:30:00.676207Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"7a1fa572d5c18c56","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-20T13:30:00.680313Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-20T13:30:00.680502Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:00.680556Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:00.680584Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:00.680869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1fa572d5c18c56 switched to configuration voters=(8799934109536324694)"}
	{"level":"info","ts":"2024-05-20T13:30:00.680969Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"77c04c1230f4f4e2","local-member-id":"7a1fa572d5c18c56","added-peer-id":"7a1fa572d5c18c56","added-peer-peer-urls":["https://192.168.50.63:2380"]}
	{"level":"info","ts":"2024-05-20T13:30:00.681134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"77c04c1230f4f4e2","local-member-id":"7a1fa572d5c18c56","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:00.681165Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:00.687394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:30:00.687604Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7a1fa572d5c18c56","initial-advertise-peer-urls":["https://192.168.50.63:2380"],"listen-peer-urls":["https://192.168.50.63:2380"],"advertise-client-urls":["https://192.168.50.63:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.63:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:30:00.687639Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:30:00.687775Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.63:2380"}
	{"level":"info","ts":"2024-05-20T13:30:00.687784Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.63:2380"}
	
	
	==> kernel <==
	 13:30:21 up 1 min,  0 users,  load average: 0.63, 0.29, 0.11
	Linux kubernetes-upgrade-785943 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [02356004940dd166f1a40abef64f938ece208ecbe0bbcc5bf4b4241ec58a67df] <==
	I0520 13:30:00.530408       1 options.go:221] external host was not specified, using 192.168.50.63
	I0520 13:30:00.557860       1 server.go:148] Version: v1.30.1
	I0520 13:30:00.557924       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:30:02.266157       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0520 13:30:02.267267       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:02.267514       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0520 13:30:02.268306       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:30:02.271502       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 13:30:02.271536       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 13:30:02.271685       1 instance.go:299] Using reconciler: lease
	W0520 13:30:02.274497       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:03.268781       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:03.268786       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:03.275524       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:04.815493       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:04.819890       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:04.884293       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:07.188945       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:07.454533       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 13:30:07.718687       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cc95d6da4ab5753f62dce158a57e829125c5eefcb3834981827b5d68c0ca612a] <==
	I0520 13:30:16.803178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 13:30:16.803360       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 13:30:16.806949       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:30:16.806999       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:30:16.807010       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 13:30:16.881899       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:30:16.881986       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:30:16.882814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:30:16.898173       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 13:30:16.898682       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:30:16.899524       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:30:16.904684       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:30:16.908301       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:30:16.955150       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:30:16.955202       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:30:16.955214       1 policy_source.go:224] refreshing policies
	I0520 13:30:16.972562       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0520 13:30:17.003589       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 13:30:17.692850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:30:18.583511       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 13:30:18.594890       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:30:18.634357       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:30:18.766770       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:30:18.773411       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:30:19.745557       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bf5bfedd3d47555e0e06c9a057ad4703dd6058bfa34c1a2309519082556afcb6] <==
	I0520 13:29:16.485489       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0520 13:29:16.485616       1 shared_informer.go:320] Caches are synced for PV protection
	I0520 13:29:16.485892       1 shared_informer.go:320] Caches are synced for GC
	I0520 13:29:16.487158       1 shared_informer.go:320] Caches are synced for expand
	I0520 13:29:16.487303       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0520 13:29:16.488647       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 13:29:16.488653       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 13:29:16.489924       1 shared_informer.go:320] Caches are synced for job
	I0520 13:29:16.584660       1 shared_informer.go:320] Caches are synced for HPA
	I0520 13:29:16.585930       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0520 13:29:16.639212       1 shared_informer.go:320] Caches are synced for cronjob
	I0520 13:29:16.666964       1 shared_informer.go:320] Caches are synced for crt configmap
	I0520 13:29:16.673391       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0520 13:29:16.694138       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:29:16.711597       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:29:17.106851       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:29:17.129202       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:29:17.129327       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 13:29:17.430135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="335.149046ms"
	I0520 13:29:17.461551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.353917ms"
	I0520 13:29:17.461787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.86µs"
	I0520 13:29:17.462009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.722µs"
	I0520 13:29:17.476410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.393µs"
	I0520 13:29:19.251984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.469µs"
	I0520 13:29:19.271402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.787µs"
	
	
	==> kube-controller-manager [c85db1a83907f4360e4ab17e2b7a1e0eb8945b4f5fd97d2c16b08e1e1882ce77] <==
	I0520 13:30:18.038835       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0520 13:30:18.040912       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0520 13:30:18.041037       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0520 13:30:18.043403       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0520 13:30:18.043708       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0520 13:30:18.043754       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0520 13:30:18.043780       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0520 13:30:18.046186       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0520 13:30:18.046436       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0520 13:30:18.047005       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0520 13:30:18.048542       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0520 13:30:18.048628       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0520 13:30:18.048701       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0520 13:30:18.048899       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0520 13:30:18.048978       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0520 13:30:18.051489       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0520 13:30:18.051787       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0520 13:30:18.052383       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0520 13:30:18.054009       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0520 13:30:18.054251       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0520 13:30:18.054330       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0520 13:30:18.056373       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0520 13:30:18.056666       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0520 13:30:18.057254       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0520 13:30:18.120030       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [934914d4fe7faf9c77fd531ae881cce7a88580f5a6a835001c8b9d83c150cce2] <==
	I0520 13:29:18.434855       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:29:18.454366       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.63"]
	I0520 13:29:18.491773       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:29:18.491822       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:29:18.491839       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:29:18.494696       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:29:18.494879       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:29:18.494906       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:29:18.496456       1 config.go:192] "Starting service config controller"
	I0520 13:29:18.496492       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:29:18.496512       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:29:18.496515       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:29:18.498690       1 config.go:319] "Starting node config controller"
	I0520 13:29:18.498723       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:29:18.597273       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:29:18.597338       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:29:18.600671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d619c81f40384b03da113ac26c57da4ca746215c6a1027ba9e05ac4436644c44] <==
	I0520 13:30:02.321211       1 server_linux.go:69] "Using iptables proxy"
	E0520 13:30:12.015636       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-785943\": dial tcp 192.168.50.63:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.63:58162->192.168.50.63:8443: read: connection reset by peer"
	E0520 13:30:13.066011       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-785943\": dial tcp 192.168.50.63:8443: connect: connection refused"
	I0520 13:30:16.879237       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.63"]
	I0520 13:30:16.976533       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:30:16.976622       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:30:16.976645       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:30:16.988170       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:30:16.988496       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:30:16.988587       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:30:16.992400       1 config.go:192] "Starting service config controller"
	I0520 13:30:16.993227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:30:16.993328       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:30:16.993335       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:30:16.995414       1 config.go:319] "Starting node config controller"
	I0520 13:30:16.995444       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:30:17.094423       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:30:17.094810       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:30:17.101379       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [856ed9c429d472374c45a6e3199eff8b3d21a4213a184a807243cf8689123d64] <==
	E0520 13:29:02.156606       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:29:02.190984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:29:02.191047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:29:02.382463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:29:02.382497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:29:02.414344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:29:02.414411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 13:29:02.419603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:29:02.419758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:29:02.425372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:29:02.425413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:29:02.436010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:29:02.436051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 13:29:02.442971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:29:02.443021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:29:02.563388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 13:29:02.563440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 13:29:02.580122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:29:02.580166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:29:02.599137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:29:02.599225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 13:29:02.621394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 13:29:02.621478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0520 13:29:04.906840       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 13:29:46.216832       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f574099752b9926352d5bfe3dda9ded33e5ea591ffdaf0355fbcd37178ef256c] <==
	W0520 13:30:16.839587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 13:30:16.839621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 13:30:16.839717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 13:30:16.839757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 13:30:16.839816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 13:30:16.839850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 13:30:16.839885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 13:30:16.839919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 13:30:16.839948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 13:30:16.839976       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 13:30:16.840010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 13:30:16.840046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 13:30:16.844404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 13:30:16.844531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 13:30:16.844665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 13:30:16.844673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 13:30:16.844784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 13:30:16.844803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 13:30:16.844964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 13:30:16.845118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 13:30:16.845238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 13:30:16.845245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 13:30:16.845370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 13:30:16.845377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 13:30:16.845485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	
	
	==> kubelet <==
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.098607    3334 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f8e4026a40fd6d5fdf695f8e4a5bd0f7-etcd-data\") pod \"etcd-kubernetes-upgrade-785943\" (UID: \"f8e4026a40fd6d5fdf695f8e4a5bd0f7\") " pod="kube-system/etcd-kubernetes-upgrade-785943"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.098631    3334 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73d382d74d04a571b3aff80808df5c97-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-785943\" (UID: \"73d382d74d04a571b3aff80808df5c97\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-785943"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.098653    3334 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73d382d74d04a571b3aff80808df5c97-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-785943\" (UID: \"73d382d74d04a571b3aff80808df5c97\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-785943"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.182940    3334 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-785943"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: E0520 13:30:13.183800    3334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.63:8443: connect: connection refused" node="kubernetes-upgrade-785943"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.322367    3334 scope.go:117] "RemoveContainer" containerID="be9f3ec1030687f02bbfaee8f0f815098c06594f9bc8338445762b9568a0d01d"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.322652    3334 scope.go:117] "RemoveContainer" containerID="02356004940dd166f1a40abef64f938ece208ecbe0bbcc5bf4b4241ec58a67df"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: E0520 13:30:13.497463    3334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-785943?timeout=10s\": dial tcp 192.168.50.63:8443: connect: connection refused" interval="800ms"
	May 20 13:30:13 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:13.585523    3334 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-785943"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.858452    3334 apiserver.go:52] "Watching apiserver"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.867476    3334 topology_manager.go:215] "Topology Admit Handler" podUID="e3430ead-92fe-47c3-b8ac-0b4fcb673845" podNamespace="kube-system" podName="storage-provisioner"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.867797    3334 topology_manager.go:215] "Topology Admit Handler" podUID="1b52ab72-287d-496e-9a1a-24789bb86ba9" podNamespace="kube-system" podName="kube-proxy-l865c"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.868027    3334 topology_manager.go:215] "Topology Admit Handler" podUID="53b40a0d-02aa-48bc-a780-5613a2757ea7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ssqvj"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.868182    3334 topology_manager.go:215] "Topology Admit Handler" podUID="e72b900a-c0a1-451d-b469-04b75f9010a4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5mrq5"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.891172    3334 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.922232    3334 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b52ab72-287d-496e-9a1a-24789bb86ba9-xtables-lock\") pod \"kube-proxy-l865c\" (UID: \"1b52ab72-287d-496e-9a1a-24789bb86ba9\") " pod="kube-system/kube-proxy-l865c"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.922300    3334 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b52ab72-287d-496e-9a1a-24789bb86ba9-lib-modules\") pod \"kube-proxy-l865c\" (UID: \"1b52ab72-287d-496e-9a1a-24789bb86ba9\") " pod="kube-system/kube-proxy-l865c"
	May 20 13:30:16 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:16.922371    3334 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e3430ead-92fe-47c3-b8ac-0b4fcb673845-tmp\") pod \"storage-provisioner\" (UID: \"e3430ead-92fe-47c3-b8ac-0b4fcb673845\") " pod="kube-system/storage-provisioner"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:17.074392    3334 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-785943"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:17.074698    3334 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-785943"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:17.093827    3334 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:17.096168    3334 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: E0520 13:30:17.132342    3334 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-785943\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-785943"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: E0520 13:30:17.139438    3334 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-785943\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-785943"
	May 20 13:30:17 kubernetes-upgrade-785943 kubelet[3334]: I0520 13:30:17.171119    3334 scope.go:117] "RemoveContainer" containerID="7b28f21ced0af52d956b474fec537e3723a7474e5f7a099bea3a06a699221cf1"
	
	
	==> storage-provisioner [7b28f21ced0af52d956b474fec537e3723a7474e5f7a099bea3a06a699221cf1] <==
	I0520 13:30:01.813328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 13:30:12.016239       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a182ed5fdcccf86bcc863e66a8293ec759a3f3f5d2eaa476ab9150931674ab7f] <==
	I0520 13:30:17.291329       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 13:30:17.308532       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 13:30:17.308599       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
** stderr ** 
	E0520 13:30:20.347693  906783 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18932-852915/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-785943 -n kubernetes-upgrade-785943
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-785943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-785943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-785943
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-785943: (1.165306847s)
--- FAIL: TestKubernetesUpgrade (414.22s)
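Note on the "bufio.Scanner: token too long" error in the stderr block above: it is Go's bufio.ErrTooLong, raised when a single line in lastStart.txt (for example the multi-kilobyte cluster-config dumps quoted in these logs) exceeds bufio.Scanner's default 64 KiB token limit, which is why the last start log could not be echoed. A minimal standalone sketch, not minikube's actual logs.go code and reading a hypothetical local copy of the file, showing the failure mode and the usual workaround of enlarging the scan buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical local copy of the log file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); log lines such as the
	// cluster-config dumps are longer, so Scan() stops and Err() reports
	// "bufio.Scanner: token too long". Raising the limit (here to 1 MiB) avoids that.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}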

TestPause/serial/SecondStartNoReconfiguration (40.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-587544 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-587544 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.982243298s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-587544] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-587544" primary control-plane node in "pause-587544" cluster
	* Updating the running kvm2 "pause-587544" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-587544" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0520 13:29:51.967602  906496 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:29:51.967740  906496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:29:51.967751  906496 out.go:304] Setting ErrFile to fd 2...
	I0520 13:29:51.967758  906496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:29:51.968054  906496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:29:51.968810  906496 out.go:298] Setting JSON to false
	I0520 13:29:51.970339  906496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11540,"bootTime":1716200252,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:29:51.970424  906496 start.go:139] virtualization: kvm guest
	I0520 13:29:52.034631  906496 out.go:177] * [pause-587544] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:29:52.203527  906496 notify.go:220] Checking for updates...
	I0520 13:29:52.296412  906496 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:29:52.431928  906496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:29:52.690297  906496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:29:52.850798  906496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:29:53.004996  906496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:29:53.159571  906496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:29:53.264599  906496 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:53.265207  906496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:53.265274  906496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:53.281584  906496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0520 13:29:53.282146  906496 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:53.282795  906496 main.go:141] libmachine: Using API Version  1
	I0520 13:29:53.282823  906496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:53.283185  906496 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:53.283422  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:53.283709  906496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:29:53.284111  906496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:53.284159  906496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:53.298952  906496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0520 13:29:53.299611  906496 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:53.300321  906496 main.go:141] libmachine: Using API Version  1
	I0520 13:29:53.300356  906496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:53.300850  906496 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:53.301128  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:53.460265  906496 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 13:29:53.610163  906496 start.go:297] selected driver: kvm2
	I0520 13:29:53.610216  906496 start.go:901] validating driver "kvm2" against &{Name:pause-587544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.1 ClusterName:pause-587544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devic
e-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:53.610430  906496 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:29:53.610967  906496 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:29:53.611087  906496 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:29:53.631881  906496 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:29:53.632849  906496 cni.go:84] Creating CNI manager for ""
	I0520 13:29:53.632868  906496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:29:53.632945  906496 start.go:340] cluster config:
	{Name:pause-587544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-587544 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:fa
lse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:29:53.633146  906496 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:29:53.835172  906496 out.go:177] * Starting "pause-587544" primary control-plane node in "pause-587544" cluster
	I0520 13:29:53.912892  906496 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:29:53.913000  906496 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:29:53.913034  906496 cache.go:56] Caching tarball of preloaded images
	I0520 13:29:53.913153  906496 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:29:53.913176  906496 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:29:53.913297  906496 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/config.json ...
	I0520 13:29:53.962414  906496 start.go:360] acquireMachinesLock for pause-587544: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:29:53.962543  906496 start.go:364] duration metric: took 57.938µs to acquireMachinesLock for "pause-587544"
	I0520 13:29:53.962567  906496 start.go:96] Skipping create...Using existing machine configuration
	I0520 13:29:53.962581  906496 fix.go:54] fixHost starting: 
	I0520 13:29:53.963044  906496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:29:53.963095  906496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:29:53.982907  906496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0520 13:29:53.983349  906496 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:29:53.983918  906496 main.go:141] libmachine: Using API Version  1
	I0520 13:29:53.983941  906496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:29:53.984341  906496 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:29:53.984577  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:53.984780  906496 main.go:141] libmachine: (pause-587544) Calling .GetState
	I0520 13:29:53.986673  906496 fix.go:112] recreateIfNeeded on pause-587544: state=Running err=<nil>
	W0520 13:29:53.986700  906496 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 13:29:54.119490  906496 out.go:177] * Updating the running kvm2 "pause-587544" VM ...
	I0520 13:29:54.150213  906496 machine.go:94] provisionDockerMachine start ...
	I0520 13:29:54.150288  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:29:54.150676  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.154407  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.154829  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.154872  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.155088  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.155243  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.155448  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.155600  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.155796  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.156031  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.156051  906496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 13:29:54.268832  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-587544
	
	I0520 13:29:54.268867  906496 main.go:141] libmachine: (pause-587544) Calling .GetMachineName
	I0520 13:29:54.269106  906496 buildroot.go:166] provisioning hostname "pause-587544"
	I0520 13:29:54.269125  906496 main.go:141] libmachine: (pause-587544) Calling .GetMachineName
	I0520 13:29:54.269344  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.272130  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.272497  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.272528  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.272637  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.272836  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.273011  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.273175  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.273330  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.273513  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.273530  906496 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-587544 && echo "pause-587544" | sudo tee /etc/hostname
	I0520 13:29:54.401420  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-587544
	
	I0520 13:29:54.401453  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.404565  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.404917  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.404949  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.405139  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.405343  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.405496  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.405638  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.405814  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.406015  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.406040  906496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-587544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-587544/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-587544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 13:29:54.513394  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 13:29:54.513432  906496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18932-852915/.minikube CaCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18932-852915/.minikube}
	I0520 13:29:54.513455  906496 buildroot.go:174] setting up certificates
	I0520 13:29:54.513466  906496 provision.go:84] configureAuth start
	I0520 13:29:54.513480  906496 main.go:141] libmachine: (pause-587544) Calling .GetMachineName
	I0520 13:29:54.513807  906496 main.go:141] libmachine: (pause-587544) Calling .GetIP
	I0520 13:29:54.516623  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.517058  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.517085  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.517239  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.519760  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.520189  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.520224  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.520296  906496 provision.go:143] copyHostCerts
	I0520 13:29:54.520356  906496 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem, removing ...
	I0520 13:29:54.520369  906496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem
	I0520 13:29:54.520432  906496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/ca.pem (1078 bytes)
	I0520 13:29:54.520556  906496 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem, removing ...
	I0520 13:29:54.520569  906496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem
	I0520 13:29:54.520596  906496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/cert.pem (1123 bytes)
	I0520 13:29:54.520671  906496 exec_runner.go:144] found /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem, removing ...
	I0520 13:29:54.520682  906496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem
	I0520 13:29:54.520707  906496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18932-852915/.minikube/key.pem (1675 bytes)
	I0520 13:29:54.520764  906496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem org=jenkins.pause-587544 san=[127.0.0.1 192.168.61.6 localhost minikube pause-587544]
	I0520 13:29:54.669459  906496 provision.go:177] copyRemoteCerts
	I0520 13:29:54.669523  906496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 13:29:54.669550  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.672464  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.672871  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.672902  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.673116  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.673305  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.673460  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.673574  906496 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/pause-587544/id_rsa Username:docker}
	I0520 13:29:54.758523  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0520 13:29:54.783719  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 13:29:54.809107  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 13:29:54.834989  906496 provision.go:87] duration metric: took 321.489994ms to configureAuth
	I0520 13:29:54.835028  906496 buildroot.go:189] setting minikube options for container-runtime
	I0520 13:29:54.835349  906496 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:29:54.835500  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:29:54.838632  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.839084  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:29:54.839113  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:29:54.839311  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:29:54.839533  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.839818  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:29:54.839981  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:29:54.840205  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:29:54.840421  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:29:54.840440  906496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 13:30:00.503862  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 13:30:00.503895  906496 machine.go:97] duration metric: took 6.353629572s to provisionDockerMachine
	I0520 13:30:00.503911  906496 start.go:293] postStartSetup for "pause-587544" (driver="kvm2")
	I0520 13:30:00.503926  906496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 13:30:00.503948  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:30:00.504390  906496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 13:30:00.504429  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:30:00.507379  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.507700  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:30:00.507727  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.507886  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:30:00.508172  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:30:00.508370  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:30:00.508504  906496 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/pause-587544/id_rsa Username:docker}
	I0520 13:30:00.602603  906496 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 13:30:00.608559  906496 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 13:30:00.608595  906496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/addons for local assets ...
	I0520 13:30:00.608679  906496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18932-852915/.minikube/files for local assets ...
	I0520 13:30:00.608791  906496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem -> 8603342.pem in /etc/ssl/certs
	I0520 13:30:00.608931  906496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 13:30:00.622646  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:30:00.651460  906496 start.go:296] duration metric: took 147.530954ms for postStartSetup
	I0520 13:30:00.651512  906496 fix.go:56] duration metric: took 6.688936273s for fixHost
	I0520 13:30:00.651542  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:30:00.654498  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.654820  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:30:00.654874  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.655206  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:30:00.655451  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:30:00.655669  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:30:00.655860  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:30:00.656064  906496 main.go:141] libmachine: Using SSH client type: native
	I0520 13:30:00.656288  906496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0520 13:30:00.656304  906496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 13:30:00.768278  906496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716211800.761833604
	
	I0520 13:30:00.768312  906496 fix.go:216] guest clock: 1716211800.761833604
	I0520 13:30:00.768323  906496 fix.go:229] Guest: 2024-05-20 13:30:00.761833604 +0000 UTC Remote: 2024-05-20 13:30:00.651517989 +0000 UTC m=+8.734273380 (delta=110.315615ms)
	I0520 13:30:00.768352  906496 fix.go:200] guest clock delta is within tolerance: 110.315615ms
	I0520 13:30:00.768360  906496 start.go:83] releasing machines lock for "pause-587544", held for 6.805801865s
	I0520 13:30:00.768383  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:30:00.768713  906496 main.go:141] libmachine: (pause-587544) Calling .GetIP
	I0520 13:30:00.772145  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.772566  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:30:00.772593  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.772848  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:30:00.773471  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:30:00.773663  906496 main.go:141] libmachine: (pause-587544) Calling .DriverName
	I0520 13:30:00.773801  906496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 13:30:00.773854  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:30:00.774174  906496 ssh_runner.go:195] Run: cat /version.json
	I0520 13:30:00.774233  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHHostname
	I0520 13:30:00.777039  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.777375  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.777531  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:30:00.777558  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.777710  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:30:00.777743  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:30:00.777769  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:00.777946  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:30:00.778048  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHPort
	I0520 13:30:00.778168  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:30:00.778208  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHKeyPath
	I0520 13:30:00.778300  906496 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/pause-587544/id_rsa Username:docker}
	I0520 13:30:00.778362  906496 main.go:141] libmachine: (pause-587544) Calling .GetSSHUsername
	I0520 13:30:00.778528  906496 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/pause-587544/id_rsa Username:docker}
	W0520 13:30:00.889096  906496 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 13:30:00.889200  906496 ssh_runner.go:195] Run: systemctl --version
	I0520 13:30:00.897838  906496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 13:30:01.070035  906496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 13:30:01.078719  906496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 13:30:01.078794  906496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 13:30:01.089899  906496 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 13:30:01.089932  906496 start.go:494] detecting cgroup driver to use...
	I0520 13:30:01.090014  906496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 13:30:01.109269  906496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 13:30:01.130145  906496 docker.go:217] disabling cri-docker service (if available) ...
	I0520 13:30:01.130253  906496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 13:30:01.150876  906496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 13:30:01.171863  906496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 13:30:01.381332  906496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 13:30:01.588228  906496 docker.go:233] disabling docker service ...
	I0520 13:30:01.588346  906496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 13:30:01.609911  906496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 13:30:01.626127  906496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 13:30:01.780691  906496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 13:30:01.946352  906496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 13:30:01.969804  906496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 13:30:01.994512  906496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 13:30:01.994634  906496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.014341  906496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 13:30:02.014429  906496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.040322  906496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.052078  906496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.123732  906496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 13:30:02.214397  906496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.285606  906496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.365029  906496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 13:30:02.463775  906496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 13:30:02.532846  906496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 13:30:02.556750  906496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:02.930294  906496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 13:30:03.447643  906496 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 13:30:03.447728  906496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 13:30:03.453138  906496 start.go:562] Will wait 60s for crictl version
	I0520 13:30:03.453204  906496 ssh_runner.go:195] Run: which crictl
	I0520 13:30:03.458306  906496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 13:30:03.500016  906496 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 13:30:03.500111  906496 ssh_runner.go:195] Run: crio --version
	I0520 13:30:03.534085  906496 ssh_runner.go:195] Run: crio --version
	I0520 13:30:03.566953  906496 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 13:30:03.568232  906496 main.go:141] libmachine: (pause-587544) Calling .GetIP
	I0520 13:30:03.571226  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:03.571623  906496 main.go:141] libmachine: (pause-587544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ef:cc", ip: ""} in network mk-pause-587544: {Iface:virbr1 ExpiryTime:2024-05-20 14:29:04 +0000 UTC Type:0 Mac:52:54:00:49:ef:cc Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:pause-587544 Clientid:01:52:54:00:49:ef:cc}
	I0520 13:30:03.571667  906496 main.go:141] libmachine: (pause-587544) DBG | domain pause-587544 has defined IP address 192.168.61.6 and MAC address 52:54:00:49:ef:cc in network mk-pause-587544
	I0520 13:30:03.571861  906496 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 13:30:03.576452  906496 kubeadm.go:877] updating cluster {Name:pause-587544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:pause-587544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 13:30:03.576604  906496 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:30:03.576661  906496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:30:03.627771  906496 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:30:03.627804  906496 crio.go:433] Images already preloaded, skipping extraction
	I0520 13:30:03.627880  906496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 13:30:03.669986  906496 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 13:30:03.670015  906496 cache_images.go:84] Images are preloaded, skipping loading
	I0520 13:30:03.670025  906496 kubeadm.go:928] updating node { 192.168.61.6 8443 v1.30.1 crio true true} ...
	I0520 13:30:03.670150  906496 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-587544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-587544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 13:30:03.670246  906496 ssh_runner.go:195] Run: crio config
	I0520 13:30:03.716867  906496 cni.go:84] Creating CNI manager for ""
	I0520 13:30:03.716896  906496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 13:30:03.716916  906496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 13:30:03.716939  906496 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.6 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-587544 NodeName:pause-587544 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 13:30:03.717105  906496 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-587544"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 13:30:03.717173  906496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 13:30:03.728009  906496 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 13:30:03.728079  906496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 13:30:03.737870  906496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0520 13:30:03.755568  906496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 13:30:03.774766  906496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0520 13:30:03.795041  906496 ssh_runner.go:195] Run: grep 192.168.61.6	control-plane.minikube.internal$ /etc/hosts
	I0520 13:30:03.799936  906496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:03.960381  906496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:30:03.979573  906496 certs.go:68] Setting up /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544 for IP: 192.168.61.6
	I0520 13:30:03.979602  906496 certs.go:194] generating shared ca certs ...
	I0520 13:30:03.979618  906496 certs.go:226] acquiring lock for ca certs: {Name:mk3eaac7961d2229d5e68b60744d742937ed2611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:03.979805  906496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key
	I0520 13:30:03.979866  906496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key
	I0520 13:30:03.979880  906496 certs.go:256] generating profile certs ...
	I0520 13:30:03.979983  906496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/client.key
	I0520 13:30:03.980081  906496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/apiserver.key.ec8af764
	I0520 13:30:03.980135  906496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/proxy-client.key
	I0520 13:30:03.980277  906496 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem (1338 bytes)
	W0520 13:30:03.980312  906496 certs.go:480] ignoring /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334_empty.pem, impossibly tiny 0 bytes
	I0520 13:30:03.980321  906496 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 13:30:03.980347  906496 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem (1078 bytes)
	I0520 13:30:03.980373  906496 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem (1123 bytes)
	I0520 13:30:03.980396  906496 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/certs/key.pem (1675 bytes)
	I0520 13:30:03.980431  906496 certs.go:484] found cert: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem (1708 bytes)
	I0520 13:30:03.981040  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 13:30:04.006505  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 13:30:04.030258  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 13:30:04.056009  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 13:30:04.082770  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 13:30:04.113646  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 13:30:04.204716  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 13:30:04.284304  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/pause-587544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 13:30:04.436163  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/ssl/certs/8603342.pem --> /usr/share/ca-certificates/8603342.pem (1708 bytes)
	I0520 13:30:04.499858  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 13:30:04.566036  906496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18932-852915/.minikube/certs/860334.pem --> /usr/share/ca-certificates/860334.pem (1338 bytes)
	I0520 13:30:04.590564  906496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 13:30:04.608846  906496 ssh_runner.go:195] Run: openssl version
	I0520 13:30:04.615994  906496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/860334.pem && ln -fs /usr/share/ca-certificates/860334.pem /etc/ssl/certs/860334.pem"
	I0520 13:30:04.626860  906496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/860334.pem
	I0520 13:30:04.632026  906496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 12:33 /usr/share/ca-certificates/860334.pem
	I0520 13:30:04.632078  906496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/860334.pem
	I0520 13:30:04.637844  906496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/860334.pem /etc/ssl/certs/51391683.0"
	I0520 13:30:04.648211  906496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8603342.pem && ln -fs /usr/share/ca-certificates/8603342.pem /etc/ssl/certs/8603342.pem"
	I0520 13:30:04.659341  906496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8603342.pem
	I0520 13:30:04.663719  906496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 12:33 /usr/share/ca-certificates/8603342.pem
	I0520 13:30:04.663772  906496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8603342.pem
	I0520 13:30:04.669611  906496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8603342.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 13:30:04.681121  906496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 13:30:04.692515  906496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:04.696911  906496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:52 /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:04.696952  906496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 13:30:04.703166  906496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 13:30:04.713614  906496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 13:30:04.718294  906496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 13:30:04.724214  906496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 13:30:04.730066  906496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 13:30:04.736492  906496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 13:30:04.742547  906496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 13:30:04.749011  906496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
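The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now. Below is a minimal standard-library Go sketch of the same check; it is illustrative rather than minikube's implementation, and the hard-coded path is just one of the certificate files checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` (a sketch, not minikube's code).
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}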
	I0520 13:30:04.754757  906496 kubeadm.go:391] StartCluster: {Name:pause-587544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:pause-587544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:30:04.754915  906496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 13:30:04.754970  906496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 13:30:04.805109  906496 cri.go:89] found id: "fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7"
	I0520 13:30:04.805133  906496 cri.go:89] found id: "8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83"
	I0520 13:30:04.805137  906496 cri.go:89] found id: "dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015"
	I0520 13:30:04.805140  906496 cri.go:89] found id: "00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8"
	I0520 13:30:04.805143  906496 cri.go:89] found id: "2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99"
	I0520 13:30:04.805146  906496 cri.go:89] found id: "04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7"
	I0520 13:30:04.805148  906496 cri.go:89] found id: ""
	I0520 13:30:04.805191  906496 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-587544 -n pause-587544
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-587544 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-587544 logs -n 25: (1.378832277s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-456265             | minikube                  | jenkins | v1.26.0 | 20 May 24 13:26 UTC | 20 May 24 13:27 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-782572 sudo           | NoKubernetes-782572       | jenkins | v1.33.1 | 20 May 24 13:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-782572                | NoKubernetes-782572       | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:26 UTC |
	| start   | -p cert-expiration-866786             | cert-expiration-866786    | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:27 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-823294             | running-upgrade-823294    | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:27 UTC |
	| start   | -p force-systemd-flag-783351          | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-456265 stop           | minikube                  | jenkins | v1.26.0 | 20 May 24 13:27 UTC | 20 May 24 13:27 UTC |
	| start   | -p stopped-upgrade-456265             | stopped-upgrade-456265    | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-783351 ssh cat     | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-783351          | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p cert-options-043975                | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:29 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-456265             | stopped-upgrade-456265    | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p pause-587544 --memory=2048         | pause-587544              | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:29 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-043975 ssh               | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-043975 -- sudo        | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-043975                | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p auto-301514 --memory=3072          | auto-301514               | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:30 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:29 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:29 UTC | 20 May 24 13:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-587544                       | pause-587544              | jenkins | v1.33.1 | 20 May 24 13:29 UTC | 20 May 24 13:30 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-301514 pgrep -a               | auto-301514               | jenkins | v1.33.1 | 20 May 24 13:30 UTC | 20 May 24 13:30 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:30 UTC | 20 May 24 13:30 UTC |
	| start   | -p kindnet-301514                     | kindnet-301514            | jenkins | v1.33.1 | 20 May 24 13:30 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:30:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:30:23.853835  906920 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:30:23.854073  906920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:30:23.854083  906920 out.go:304] Setting ErrFile to fd 2...
	I0520 13:30:23.854088  906920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:30:23.854258  906920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:30:23.854965  906920 out.go:298] Setting JSON to false
	I0520 13:30:23.856072  906920 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11572,"bootTime":1716200252,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:30:23.856140  906920 start.go:139] virtualization: kvm guest
	I0520 13:30:23.858467  906920 out.go:177] * [kindnet-301514] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:30:23.859861  906920 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:30:23.859824  906920 notify.go:220] Checking for updates...
	I0520 13:30:23.861446  906920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:30:23.862938  906920 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:30:23.864346  906920 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:30:23.865646  906920 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:30:23.866960  906920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:30:23.868784  906920 config.go:182] Loaded profile config "auto-301514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:23.868917  906920 config.go:182] Loaded profile config "cert-expiration-866786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:23.869101  906920 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:23.869237  906920 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:30:23.910388  906920 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 13:30:23.911670  906920 start.go:297] selected driver: kvm2
	I0520 13:30:23.911708  906920 start.go:901] validating driver "kvm2" against <nil>
	I0520 13:30:23.911722  906920 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:30:23.912527  906920 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:30:23.912605  906920 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:30:23.928940  906920 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:30:23.929017  906920 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 13:30:23.929388  906920 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:30:23.929482  906920 cni.go:84] Creating CNI manager for "kindnet"
	I0520 13:30:23.929494  906920 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 13:30:23.929572  906920 start.go:340] cluster config:
	{Name:kindnet-301514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-301514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:30:23.929711  906920 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:30:23.931609  906920 out.go:177] * Starting "kindnet-301514" primary control-plane node in "kindnet-301514" cluster
	I0520 13:30:23.932794  906920 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:30:23.932847  906920 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:30:23.932862  906920 cache.go:56] Caching tarball of preloaded images
	I0520 13:30:23.932987  906920 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:30:23.933002  906920 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:30:23.933135  906920 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kindnet-301514/config.json ...
	I0520 13:30:23.933165  906920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kindnet-301514/config.json: {Name:mk24fe73007643ab14b321023b3e1d358d5d9e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:23.933337  906920 start.go:360] acquireMachinesLock for kindnet-301514: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:30:23.933380  906920 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "kindnet-301514"
	I0520 13:30:23.933403  906920 start.go:93] Provisioning new machine with config: &{Name:kindnet-301514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:kindnet-301514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:23.933500  906920 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 13:30:24.221991  906496 pod_ready.go:102] pod "etcd-pause-587544" in "kube-system" namespace has status "Ready":"False"
	I0520 13:30:25.222346  906496 pod_ready.go:92] pod "etcd-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.222373  906496 pod_ready.go:81] duration metric: took 12.007059164s for pod "etcd-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.222385  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.227301  906496 pod_ready.go:92] pod "kube-apiserver-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.227324  906496 pod_ready.go:81] duration metric: took 4.930979ms for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.227335  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.231745  906496 pod_ready.go:92] pod "kube-controller-manager-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.231765  906496 pod_ready.go:81] duration metric: took 4.421361ms for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.231776  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.236256  906496 pod_ready.go:92] pod "kube-proxy-s7v7z" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.236278  906496 pod_ready.go:81] duration metric: took 4.495708ms for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.236286  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.241256  906496 pod_ready.go:92] pod "kube-scheduler-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.241275  906496 pod_ready.go:81] duration metric: took 4.983313ms for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.241281  906496 pod_ready.go:38] duration metric: took 13.537765962s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:25.241298  906496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 13:30:25.257190  906496 ops.go:34] apiserver oom_adj: -16
	I0520 13:30:25.257215  906496 kubeadm.go:591] duration metric: took 20.400996763s to restartPrimaryControlPlane
	I0520 13:30:25.257226  906496 kubeadm.go:393] duration metric: took 20.502478549s to StartCluster
	I0520 13:30:25.257247  906496 settings.go:142] acquiring lock: {Name:mk4281d9011919f2beed93cad1a6e2e67e70984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:25.257339  906496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:30:25.258767  906496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:25.259028  906496 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:25.260893  906496 out.go:177] * Verifying Kubernetes components...
	I0520 13:30:25.259107  906496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 13:30:25.259296  906496 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:25.262330  906496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:25.263691  906496 out.go:177] * Enabled addons: 
	I0520 13:30:25.264890  906496 addons.go:505] duration metric: took 5.780868ms for enable addons: enabled=[]
	I0520 13:30:25.438569  906496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:30:25.460424  906496 node_ready.go:35] waiting up to 6m0s for node "pause-587544" to be "Ready" ...
	I0520 13:30:25.464154  906496 node_ready.go:49] node "pause-587544" has status "Ready":"True"
	I0520 13:30:25.464181  906496 node_ready.go:38] duration metric: took 3.723902ms for node "pause-587544" to be "Ready" ...
	I0520 13:30:25.464192  906496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:25.624135  906496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4pv8h" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.020290  906496 pod_ready.go:92] pod "coredns-7db6d8ff4d-4pv8h" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:26.020327  906496 pod_ready.go:81] duration metric: took 396.143665ms for pod "coredns-7db6d8ff4d-4pv8h" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.020342  906496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.420282  906496 pod_ready.go:92] pod "etcd-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:26.420314  906496 pod_ready.go:81] duration metric: took 399.963012ms for pod "etcd-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.420329  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.820397  906496 pod_ready.go:92] pod "kube-apiserver-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:26.820423  906496 pod_ready.go:81] duration metric: took 400.086327ms for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.820433  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
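The pod_ready lines above poll each system-critical pod until its Ready condition reports True. The client-go sketch below shows that style of readiness poll in reduced form; the kubeconfig path, namespace, pod name and timeout are illustrative assumptions, not what the harness actually uses.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true once the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; the test harness uses its own per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-pause-587544", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}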
	I0520 13:30:23.935251  906920 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 13:30:23.935427  906920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:23.935475  906920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:23.951537  906920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0520 13:30:23.952052  906920 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:23.952679  906920 main.go:141] libmachine: Using API Version  1
	I0520 13:30:23.952705  906920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:23.953073  906920 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:23.953270  906920 main.go:141] libmachine: (kindnet-301514) Calling .GetMachineName
	I0520 13:30:23.953525  906920 main.go:141] libmachine: (kindnet-301514) Calling .DriverName
	I0520 13:30:23.953682  906920 start.go:159] libmachine.API.Create for "kindnet-301514" (driver="kvm2")
	I0520 13:30:23.953713  906920 client.go:168] LocalClient.Create starting
	I0520 13:30:23.953745  906920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 13:30:23.953791  906920 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:23.953817  906920 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:23.953893  906920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 13:30:23.953920  906920 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:23.953936  906920 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:23.953962  906920 main.go:141] libmachine: Running pre-create checks...
	I0520 13:30:23.953972  906920 main.go:141] libmachine: (kindnet-301514) Calling .PreCreateCheck
	I0520 13:30:23.954341  906920 main.go:141] libmachine: (kindnet-301514) Calling .GetConfigRaw
	I0520 13:30:23.954782  906920 main.go:141] libmachine: Creating machine...
	I0520 13:30:23.954798  906920 main.go:141] libmachine: (kindnet-301514) Calling .Create
	I0520 13:30:23.954959  906920 main.go:141] libmachine: (kindnet-301514) Creating KVM machine...
	I0520 13:30:23.956419  906920 main.go:141] libmachine: (kindnet-301514) DBG | found existing default KVM network
	I0520 13:30:23.957681  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:23.957512  906943 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:94:d3} reservation:<nil>}
	I0520 13:30:23.958900  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:23.958789  906943 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a8750}
	I0520 13:30:23.958923  906920 main.go:141] libmachine: (kindnet-301514) DBG | created network xml: 
	I0520 13:30:23.958933  906920 main.go:141] libmachine: (kindnet-301514) DBG | <network>
	I0520 13:30:23.958941  906920 main.go:141] libmachine: (kindnet-301514) DBG |   <name>mk-kindnet-301514</name>
	I0520 13:30:23.958956  906920 main.go:141] libmachine: (kindnet-301514) DBG |   <dns enable='no'/>
	I0520 13:30:23.958964  906920 main.go:141] libmachine: (kindnet-301514) DBG |   
	I0520 13:30:23.958973  906920 main.go:141] libmachine: (kindnet-301514) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0520 13:30:23.958980  906920 main.go:141] libmachine: (kindnet-301514) DBG |     <dhcp>
	I0520 13:30:23.958990  906920 main.go:141] libmachine: (kindnet-301514) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0520 13:30:23.959002  906920 main.go:141] libmachine: (kindnet-301514) DBG |     </dhcp>
	I0520 13:30:23.959019  906920 main.go:141] libmachine: (kindnet-301514) DBG |   </ip>
	I0520 13:30:23.959030  906920 main.go:141] libmachine: (kindnet-301514) DBG |   
	I0520 13:30:23.959041  906920 main.go:141] libmachine: (kindnet-301514) DBG | </network>
	I0520 13:30:23.959051  906920 main.go:141] libmachine: (kindnet-301514) DBG | 
	I0520 13:30:23.963788  906920 main.go:141] libmachine: (kindnet-301514) DBG | trying to create private KVM network mk-kindnet-301514 192.168.50.0/24...
	I0520 13:30:24.040929  906920 main.go:141] libmachine: (kindnet-301514) DBG | private KVM network mk-kindnet-301514 192.168.50.0/24 created
	I0520 13:30:24.040975  906920 main.go:141] libmachine: (kindnet-301514) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514 ...
	I0520 13:30:24.040996  906920 main.go:141] libmachine: (kindnet-301514) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:30:24.041023  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.040942  906943 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:30:24.041134  906920 main.go:141] libmachine: (kindnet-301514) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:30:24.309468  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.309343  906943 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/id_rsa...
	I0520 13:30:24.478870  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.478706  906943 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/kindnet-301514.rawdisk...
	I0520 13:30:24.478903  906920 main.go:141] libmachine: (kindnet-301514) DBG | Writing magic tar header
	I0520 13:30:24.478916  906920 main.go:141] libmachine: (kindnet-301514) DBG | Writing SSH key tar header
	I0520 13:30:24.478932  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.478827  906943 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514 ...
	I0520 13:30:24.479012  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514
	I0520 13:30:24.479042  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 13:30:24.479057  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514 (perms=drwx------)
	I0520 13:30:24.479071  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:30:24.479081  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 13:30:24.479100  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 13:30:24.479116  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:30:24.479126  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:30:24.479139  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 13:30:24.479148  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:30:24.479157  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:30:24.479163  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home
	I0520 13:30:24.479171  906920 main.go:141] libmachine: (kindnet-301514) DBG | Skipping /home - not owner
	I0520 13:30:24.479181  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:30:24.479187  906920 main.go:141] libmachine: (kindnet-301514) Creating domain...
	I0520 13:30:24.480211  906920 main.go:141] libmachine: (kindnet-301514) define libvirt domain using xml: 
	I0520 13:30:24.480233  906920 main.go:141] libmachine: (kindnet-301514) <domain type='kvm'>
	I0520 13:30:24.480243  906920 main.go:141] libmachine: (kindnet-301514)   <name>kindnet-301514</name>
	I0520 13:30:24.480274  906920 main.go:141] libmachine: (kindnet-301514)   <memory unit='MiB'>3072</memory>
	I0520 13:30:24.480289  906920 main.go:141] libmachine: (kindnet-301514)   <vcpu>2</vcpu>
	I0520 13:30:24.480296  906920 main.go:141] libmachine: (kindnet-301514)   <features>
	I0520 13:30:24.480307  906920 main.go:141] libmachine: (kindnet-301514)     <acpi/>
	I0520 13:30:24.480315  906920 main.go:141] libmachine: (kindnet-301514)     <apic/>
	I0520 13:30:24.480390  906920 main.go:141] libmachine: (kindnet-301514)     <pae/>
	I0520 13:30:24.480419  906920 main.go:141] libmachine: (kindnet-301514)     
	I0520 13:30:24.480430  906920 main.go:141] libmachine: (kindnet-301514)   </features>
	I0520 13:30:24.480451  906920 main.go:141] libmachine: (kindnet-301514)   <cpu mode='host-passthrough'>
	I0520 13:30:24.480463  906920 main.go:141] libmachine: (kindnet-301514)   
	I0520 13:30:24.480481  906920 main.go:141] libmachine: (kindnet-301514)   </cpu>
	I0520 13:30:24.480492  906920 main.go:141] libmachine: (kindnet-301514)   <os>
	I0520 13:30:24.480500  906920 main.go:141] libmachine: (kindnet-301514)     <type>hvm</type>
	I0520 13:30:24.480511  906920 main.go:141] libmachine: (kindnet-301514)     <boot dev='cdrom'/>
	I0520 13:30:24.480518  906920 main.go:141] libmachine: (kindnet-301514)     <boot dev='hd'/>
	I0520 13:30:24.480542  906920 main.go:141] libmachine: (kindnet-301514)     <bootmenu enable='no'/>
	I0520 13:30:24.480565  906920 main.go:141] libmachine: (kindnet-301514)   </os>
	I0520 13:30:24.480577  906920 main.go:141] libmachine: (kindnet-301514)   <devices>
	I0520 13:30:24.480591  906920 main.go:141] libmachine: (kindnet-301514)     <disk type='file' device='cdrom'>
	I0520 13:30:24.480613  906920 main.go:141] libmachine: (kindnet-301514)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/boot2docker.iso'/>
	I0520 13:30:24.480633  906920 main.go:141] libmachine: (kindnet-301514)       <target dev='hdc' bus='scsi'/>
	I0520 13:30:24.480645  906920 main.go:141] libmachine: (kindnet-301514)       <readonly/>
	I0520 13:30:24.480655  906920 main.go:141] libmachine: (kindnet-301514)     </disk>
	I0520 13:30:24.480665  906920 main.go:141] libmachine: (kindnet-301514)     <disk type='file' device='disk'>
	I0520 13:30:24.480682  906920 main.go:141] libmachine: (kindnet-301514)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:30:24.480699  906920 main.go:141] libmachine: (kindnet-301514)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/kindnet-301514.rawdisk'/>
	I0520 13:30:24.480714  906920 main.go:141] libmachine: (kindnet-301514)       <target dev='hda' bus='virtio'/>
	I0520 13:30:24.480723  906920 main.go:141] libmachine: (kindnet-301514)     </disk>
	I0520 13:30:24.480730  906920 main.go:141] libmachine: (kindnet-301514)     <interface type='network'>
	I0520 13:30:24.480742  906920 main.go:141] libmachine: (kindnet-301514)       <source network='mk-kindnet-301514'/>
	I0520 13:30:24.480749  906920 main.go:141] libmachine: (kindnet-301514)       <model type='virtio'/>
	I0520 13:30:24.480761  906920 main.go:141] libmachine: (kindnet-301514)     </interface>
	I0520 13:30:24.480771  906920 main.go:141] libmachine: (kindnet-301514)     <interface type='network'>
	I0520 13:30:24.480782  906920 main.go:141] libmachine: (kindnet-301514)       <source network='default'/>
	I0520 13:30:24.480796  906920 main.go:141] libmachine: (kindnet-301514)       <model type='virtio'/>
	I0520 13:30:24.480808  906920 main.go:141] libmachine: (kindnet-301514)     </interface>
	I0520 13:30:24.480815  906920 main.go:141] libmachine: (kindnet-301514)     <serial type='pty'>
	I0520 13:30:24.480823  906920 main.go:141] libmachine: (kindnet-301514)       <target port='0'/>
	I0520 13:30:24.480830  906920 main.go:141] libmachine: (kindnet-301514)     </serial>
	I0520 13:30:24.480839  906920 main.go:141] libmachine: (kindnet-301514)     <console type='pty'>
	I0520 13:30:24.480849  906920 main.go:141] libmachine: (kindnet-301514)       <target type='serial' port='0'/>
	I0520 13:30:24.480859  906920 main.go:141] libmachine: (kindnet-301514)     </console>
	I0520 13:30:24.480874  906920 main.go:141] libmachine: (kindnet-301514)     <rng model='virtio'>
	I0520 13:30:24.480886  906920 main.go:141] libmachine: (kindnet-301514)       <backend model='random'>/dev/random</backend>
	I0520 13:30:24.480894  906920 main.go:141] libmachine: (kindnet-301514)     </rng>
	I0520 13:30:24.480905  906920 main.go:141] libmachine: (kindnet-301514)     
	I0520 13:30:24.480914  906920 main.go:141] libmachine: (kindnet-301514)     
	I0520 13:30:24.480923  906920 main.go:141] libmachine: (kindnet-301514)   </devices>
	I0520 13:30:24.480932  906920 main.go:141] libmachine: (kindnet-301514) </domain>
	I0520 13:30:24.480943  906920 main.go:141] libmachine: (kindnet-301514) 
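The kvm2 driver defines the guest by handing libvirt a domain XML document like the one logged above. As a much-reduced illustration, the Go sketch below marshals a similarly shaped <domain> element with encoding/xml; the struct names are hypothetical and cover only a few of the fields shown, not the driver's real types.

package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

// Reduced, illustrative structs for a libvirt <domain> definition.
type memory struct {
	Unit  string `xml:"unit,attr"`
	Value int    `xml:",chardata"`
}

type domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  memory   `xml:"memory"`
	VCPU    int      `xml:"vcpu"`
}

func main() {
	d := domain{
		Type:   "kvm",
		Name:   "kindnet-301514",
		Memory: memory{Unit: "MiB", Value: 3072},
		VCPU:   2,
	}
	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}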
	I0520 13:30:24.485189  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:fb:47:1c in network default
	I0520 13:30:24.485776  906920 main.go:141] libmachine: (kindnet-301514) Ensuring networks are active...
	I0520 13:30:24.485808  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:24.486595  906920 main.go:141] libmachine: (kindnet-301514) Ensuring network default is active
	I0520 13:30:24.486926  906920 main.go:141] libmachine: (kindnet-301514) Ensuring network mk-kindnet-301514 is active
	I0520 13:30:24.487596  906920 main.go:141] libmachine: (kindnet-301514) Getting domain xml...
	I0520 13:30:24.488327  906920 main.go:141] libmachine: (kindnet-301514) Creating domain...
	I0520 13:30:25.765419  906920 main.go:141] libmachine: (kindnet-301514) Waiting to get IP...
	I0520 13:30:25.766469  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:25.767059  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:25.767086  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:25.767038  906943 retry.go:31] will retry after 299.088602ms: waiting for machine to come up
	I0520 13:30:26.068432  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:26.069030  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:26.069065  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:26.068979  906943 retry.go:31] will retry after 316.527825ms: waiting for machine to come up
	I0520 13:30:26.387669  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:26.388173  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:26.388205  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:26.388127  906943 retry.go:31] will retry after 394.159ms: waiting for machine to come up
	I0520 13:30:26.783655  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:26.784185  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:26.784213  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:26.784121  906943 retry.go:31] will retry after 467.903678ms: waiting for machine to come up
	I0520 13:30:27.253357  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:27.253851  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:27.253878  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:27.253804  906943 retry.go:31] will retry after 574.175778ms: waiting for machine to come up
	I0520 13:30:27.829129  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:27.829635  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:27.829660  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:27.829585  906943 retry.go:31] will retry after 880.232257ms: waiting for machine to come up
	I0520 13:30:28.711258  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:28.711718  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:28.711748  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:28.711655  906943 retry.go:31] will retry after 750.031656ms: waiting for machine to come up
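The kindnet-301514 entries above follow the usual libmachine KVM flow: the domain is defined from the generated XML, both libvirt networks (default and mk-kindnet-301514) are activated, the domain is created, and the driver then polls for a DHCP lease on the 52:54:00:a0:03:00 interface with an increasing backoff (retry.go:31). The same state can be checked by hand with virsh; a minimal sketch, assuming virsh on the host talks to the same libvirt instance the driver uses (typically qemu:///system):

	# confirm the defined domain and its interfaces match the XML logged above
	virsh dumpxml kindnet-301514 | grep -A3 "interface type"
	virsh domiflist kindnet-301514
	# what the driver is waiting for: a lease for the domain's MAC on mk-kindnet-301514
	virsh net-dhcp-leases mk-kindnet-301514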
	I0520 13:30:27.220129  906496 pod_ready.go:92] pod "kube-controller-manager-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:27.220158  906496 pod_ready.go:81] duration metric: took 399.717093ms for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:27.220170  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:27.621145  906496 pod_ready.go:92] pod "kube-proxy-s7v7z" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:27.621176  906496 pod_ready.go:81] duration metric: took 400.999642ms for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:27.621187  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:28.019261  906496 pod_ready.go:92] pod "kube-scheduler-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:28.019293  906496 pod_ready.go:81] duration metric: took 398.097225ms for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:28.019306  906496 pod_ready.go:38] duration metric: took 2.555100881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:28.019330  906496 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:30:28.019394  906496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:30:28.037277  906496 api_server.go:72] duration metric: took 2.778214016s to wait for apiserver process to appear ...
	I0520 13:30:28.037308  906496 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:30:28.037331  906496 api_server.go:253] Checking apiserver healthz at https://192.168.61.6:8443/healthz ...
	I0520 13:30:28.042854  906496 api_server.go:279] https://192.168.61.6:8443/healthz returned 200:
	ok
	I0520 13:30:28.043937  906496 api_server.go:141] control plane version: v1.30.1
	I0520 13:30:28.043958  906496 api_server.go:131] duration metric: took 6.642823ms to wait for apiserver health ...
	I0520 13:30:28.043965  906496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:30:28.222698  906496 system_pods.go:59] 6 kube-system pods found
	I0520 13:30:28.222734  906496 system_pods.go:61] "coredns-7db6d8ff4d-4pv8h" [86595569-f17c-477e-8be6-1094a6a73be8] Running
	I0520 13:30:28.222741  906496 system_pods.go:61] "etcd-pause-587544" [7219b565-84e4-40f6-9c7a-7847da77a04a] Running
	I0520 13:30:28.222746  906496 system_pods.go:61] "kube-apiserver-pause-587544" [79e78b6d-0a7b-4f59-b5e2-772cdade9f5f] Running
	I0520 13:30:28.222751  906496 system_pods.go:61] "kube-controller-manager-pause-587544" [33f29992-dec6-4077-bc45-64bb7d1e07ec] Running
	I0520 13:30:28.222755  906496 system_pods.go:61] "kube-proxy-s7v7z" [9bb6169f-8624-4bb9-9703-a3b4007b4f24] Running
	I0520 13:30:28.222758  906496 system_pods.go:61] "kube-scheduler-pause-587544" [c9c20432-704d-4dfa-a580-8abdd5b17b5b] Running
	I0520 13:30:28.222766  906496 system_pods.go:74] duration metric: took 178.794555ms to wait for pod list to return data ...
	I0520 13:30:28.222781  906496 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:30:28.420235  906496 default_sa.go:45] found service account: "default"
	I0520 13:30:28.420268  906496 default_sa.go:55] duration metric: took 197.474301ms for default service account to be created ...
	I0520 13:30:28.420279  906496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:30:28.623608  906496 system_pods.go:86] 6 kube-system pods found
	I0520 13:30:28.623647  906496 system_pods.go:89] "coredns-7db6d8ff4d-4pv8h" [86595569-f17c-477e-8be6-1094a6a73be8] Running
	I0520 13:30:28.623652  906496 system_pods.go:89] "etcd-pause-587544" [7219b565-84e4-40f6-9c7a-7847da77a04a] Running
	I0520 13:30:28.623661  906496 system_pods.go:89] "kube-apiserver-pause-587544" [79e78b6d-0a7b-4f59-b5e2-772cdade9f5f] Running
	I0520 13:30:28.623665  906496 system_pods.go:89] "kube-controller-manager-pause-587544" [33f29992-dec6-4077-bc45-64bb7d1e07ec] Running
	I0520 13:30:28.623669  906496 system_pods.go:89] "kube-proxy-s7v7z" [9bb6169f-8624-4bb9-9703-a3b4007b4f24] Running
	I0520 13:30:28.623673  906496 system_pods.go:89] "kube-scheduler-pause-587544" [c9c20432-704d-4dfa-a580-8abdd5b17b5b] Running
	I0520 13:30:28.623681  906496 system_pods.go:126] duration metric: took 203.394432ms to wait for k8s-apps to be running ...
	I0520 13:30:28.623691  906496 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:30:28.623753  906496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:30:28.639742  906496 system_svc.go:56] duration metric: took 16.038643ms WaitForService to wait for kubelet
	I0520 13:30:28.639779  906496 kubeadm.go:576] duration metric: took 3.380721942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:30:28.639805  906496 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:30:28.819642  906496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:30:28.819676  906496 node_conditions.go:123] node cpu capacity is 2
	I0520 13:30:28.819691  906496 node_conditions.go:105] duration metric: took 179.879032ms to run NodePressure ...
	I0520 13:30:28.819706  906496 start.go:240] waiting for startup goroutines ...
	I0520 13:30:28.819715  906496 start.go:245] waiting for cluster config update ...
	I0520 13:30:28.819727  906496 start.go:254] writing updated cluster config ...
	I0520 13:30:28.820104  906496 ssh_runner.go:195] Run: rm -f paused
	I0520 13:30:28.879338  906496 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 13:30:28.881261  906496 out.go:177] * Done! kubectl is now configured to use "pause-587544" cluster and "default" namespace by default
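At this point the pause-587544 profile is up: every kube-system pod reported Ready, the apiserver answered /healthz with 200, the default service account existed, and the kubelet unit was active, so minikube wrote the updated cluster config and finished. The same checks can be repeated manually against this profile; a minimal sketch, assuming the kubeconfig context is named after the profile (minikube's default):

	kubectl --context pause-587544 get pods -n kube-system
	kubectl --context pause-587544 get --raw /healthz        # expect "ok"
	out/minikube-linux-amd64 -p pause-587544 ssh -- sudo systemctl is-active kubelet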
	
	
	==> CRI-O <==
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.540038231Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211829540013785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a107d62d-5901-4178-a5dc-cf7d81f454c5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.540648154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76960d19-9fd5-4716-9edb-c1a38988a5c1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.540724618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76960d19-9fd5-4716-9edb-c1a38988a5c1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.540998112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76960d19-9fd5-4716-9edb-c1a38988a5c1 name=/runtime.v1.RuntimeService/ListContainers
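	These CRI-O journal entries are debug-level traces of CRI calls (Version, ImageFsInfo, and unfiltered ListContainers) arriving on the crio socket; the ListContainers responses enumerate the coredns, kube-proxy, etcd, kube-apiserver, kube-scheduler and kube-controller-manager containers, including the exited earlier attempts alongside the currently running ones. The same view can be obtained inside the guest with crictl; a minimal sketch, assuming crictl is present in the VM (minikube normally ships it):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a   # running and exited containers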
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.586906001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=089fe1cb-4916-4315-b579-cb7773edd898 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.586997540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=089fe1cb-4916-4315-b579-cb7773edd898 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.588128285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cc3ab44-f584-4273-921e-a691df0df603 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.588821415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211829588786001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cc3ab44-f584-4273-921e-a691df0df603 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.589364300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=044a739c-1d1e-4211-8b63-01ee8de69f10 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.589433472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=044a739c-1d1e-4211-8b63-01ee8de69f10 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.589746266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=044a739c-1d1e-4211-8b63-01ee8de69f10 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.632963148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc4eeaaa-0a22-4d09-8394-e7e7c11da89d name=/runtime.v1.RuntimeService/Version
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.633060768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc4eeaaa-0a22-4d09-8394-e7e7c11da89d name=/runtime.v1.RuntimeService/Version
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.634037353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b107a51-d43d-4bbf-bbaa-b3c2e257b86a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.634500264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211829634464148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b107a51-d43d-4bbf-bbaa-b3c2e257b86a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.635266475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80e6568f-289c-43ce-b08c-32dc4a503a3e name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.635339659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80e6568f-289c-43ce-b08c-32dc4a503a3e name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.635592751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80e6568f-289c-43ce-b08c-32dc4a503a3e name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.683902106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd1ef6f6-4d8a-437e-b23d-d77034cdb27e name=/runtime.v1.RuntimeService/Version
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.683976581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd1ef6f6-4d8a-437e-b23d-d77034cdb27e name=/runtime.v1.RuntimeService/Version
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.685443826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=994c2970-ac92-45a6-9986-229841763292 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.686021361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211829685994875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=994c2970-ac92-45a6-9986-229841763292 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.686767174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba536c31-1e9a-4881-bd21-b72b3e5d954e name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.686821008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba536c31-1e9a-4881-bd21-b72b3e5d954e name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:29 pause-587544 crio[2682]: time="2024-05-20 13:30:29.687089386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba536c31-1e9a-4881-bd21-b72b3e5d954e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8bd8e14647d2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   1                   b206fb941899b       coredns-7db6d8ff4d-4pv8h
	f910001e4f2db       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   19 seconds ago      Running             kube-proxy                1                   cb325a1ca4b9b       kube-proxy-s7v7z
	f3c710c9e3948       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   99150cc12c0d9       etcd-pause-587544
	cf9235585eac9       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   23 seconds ago      Running             kube-apiserver            2                   0e8de9422d55f       kube-apiserver-pause-587544
	677c880f54fce       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   23 seconds ago      Running             kube-controller-manager   2                   8eaa604737fbe       kube-controller-manager-pause-587544
	426e57513abec       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   23 seconds ago      Running             kube-scheduler            2                   bca6dc98347ca       kube-scheduler-pause-587544
	fa7d92021d0bb       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   27 seconds ago      Exited              kube-apiserver            1                   38c5b4d5f317d       kube-apiserver-pause-587544
	8e482a4ef5144       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   27 seconds ago      Exited              kube-scheduler            1                   ccc780018d7ca       kube-scheduler-pause-587544
	dd0fdc485f85f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   27 seconds ago      Exited              kube-controller-manager   1                   42fc97161c764       kube-controller-manager-pause-587544
	00c3af83ad0ee       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   9c66483e8c8e8       etcd-pause-587544
	2c07e95b8560a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago      Exited              coredns                   0                   72daa8789d728       coredns-7db6d8ff4d-4pv8h
	04cec37cef9fa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   39 seconds ago      Exited              kube-proxy                0                   d2e003ce745c0       kube-proxy-s7v7z
	
	
	==> coredns [2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40900 - 2324 "HINFO IN 8476660019988762421.5394260937817126370. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013051081s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52214 - 19952 "HINFO IN 5349525857651037922.1821123622764411973. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029849038s
	
	
	==> describe nodes <==
	Name:               pause-587544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-587544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=pause-587544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_29_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-587544
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:30:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.6
	  Hostname:    pause-587544
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b66f57f737c549779dcab985441bb9bd
	  System UUID:                b66f57f7-37c5-4977-9dca-b985441bb9bd
	  Boot ID:                    31a89234-52e9-4426-8c92-da0a9007a676
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4pv8h                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 etcd-pause-587544                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         54s
	  kube-system                 kube-apiserver-pause-587544             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-pause-587544    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-s7v7z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-pause-587544             100m (5%)     0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     54s                kubelet          Node pause-587544 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  54s                kubelet          Node pause-587544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s                kubelet          Node pause-587544 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeReady                52s                kubelet          Node pause-587544 status is now: NodeReady
	  Normal  RegisteredNode           41s                node-controller  Node pause-587544 event: Registered Node pause-587544 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-587544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-587544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-587544 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-587544 event: Registered Node pause-587544 in Controller
	
	
	==> dmesg <==
	[ +13.101316] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.057470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059446] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.177312] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.138999] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.319888] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.648002] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065554] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.525015] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.712493] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.228878] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.614213] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[ +13.396682] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +0.106445] kauditd_printk_skb: 15 callbacks suppressed
	[May20 13:30] systemd-fstab-generator[2143]: Ignoring "noauto" option for root device
	[  +0.126634] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.086524] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.241855] systemd-fstab-generator[2169]: Ignoring "noauto" option for root device
	[  +0.150873] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.880829] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +1.125651] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +2.169665] systemd-fstab-generator[3080]: Ignoring "noauto" option for root device
	[  +0.358619] kauditd_printk_skb: 239 callbacks suppressed
	[ +16.151857] kauditd_printk_skb: 37 callbacks suppressed
	[  +2.793298] systemd-fstab-generator[3620]: Ignoring "noauto" option for root device
	
	
	==> etcd [00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8] <==
	{"level":"warn","ts":"2024-05-20T13:30:02.962724Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-20T13:30:02.96289Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.6:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.6:2380","--initial-cluster=pause-587544=https://192.168.61.6:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.6:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.6:2380","--name=pause-587544","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file
=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-05-20T13:30:02.963041Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-05-20T13:30:02.963095Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-20T13:30:02.963123Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.6:2380"]}
	{"level":"info","ts":"2024-05-20T13:30:02.963172Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:30:02.96406Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.6:2379"]}
	{"level":"info","ts":"2024-05-20T13:30:02.964372Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-587544","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.6:2380"],"listen-peer-urls":["https://192.168.61.6:2380"],"advertise-client-urls":["https://192.168.61.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-to
ken":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-05-20T13:30:02.972683Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"8.087724ms"}
	
	
	==> etcd [f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449] <==
	{"level":"info","ts":"2024-05-20T13:30:07.059695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:07.059793Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:07.060254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 switched to configuration voters=(14345459793014229238)"}
	{"level":"info","ts":"2024-05-20T13:30:07.060547Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"75789e35a2c78ec","local-member-id":"c7154fad1e4308f6","added-peer-id":"c7154fad1e4308f6","added-peer-peer-urls":["https://192.168.61.6:2380"]}
	{"level":"info","ts":"2024-05-20T13:30:07.061357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"75789e35a2c78ec","local-member-id":"c7154fad1e4308f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:07.061728Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:07.073581Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:30:07.076664Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.6:2380"}
	{"level":"info","ts":"2024-05-20T13:30:07.079266Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.6:2380"}
	{"level":"info","ts":"2024-05-20T13:30:07.079481Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c7154fad1e4308f6","initial-advertise-peer-urls":["https://192.168.61.6:2380"],"listen-peer-urls":["https://192.168.61.6:2380"],"advertise-client-urls":["https://192.168.61.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:30:07.08043Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:30:08.621441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:08.62148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:08.621513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 received MsgPreVoteResp from c7154fad1e4308f6 at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:08.621526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.621532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 received MsgVoteResp from c7154fad1e4308f6 at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.62154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.621581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7154fad1e4308f6 elected leader c7154fad1e4308f6 at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.627639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c7154fad1e4308f6","local-member-attributes":"{Name:pause-587544 ClientURLs:[https://192.168.61.6:2379]}","request-path":"/0/members/c7154fad1e4308f6/attributes","cluster-id":"75789e35a2c78ec","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:30:08.627687Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:30:08.627909Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:30:08.627962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:30:08.628Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:30:08.629849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.6:2379"}
	{"level":"info","ts":"2024-05-20T13:30:08.629848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:30:30 up 1 min,  0 users,  load average: 1.02, 0.36, 0.13
	Linux pause-587544 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f] <==
	I0520 13:30:10.057055       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:30:10.070432       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:30:10.070611       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:30:10.070643       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:30:10.070671       1 cache.go:32] Waiting for caches to sync for autoregister controller
	E0520 13:30:10.110166       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 13:30:10.140700       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:30:10.143014       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:30:10.143032       1 policy_source.go:224] refreshing policies
	I0520 13:30:10.147588       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:30:10.147680       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 13:30:10.149003       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:30:10.152647       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:30:10.155983       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:30:10.156029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:30:10.165409       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:30:10.171627       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:30:10.963147       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:30:11.527631       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 13:30:11.543897       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:30:11.591963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:30:11.627327       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:30:11.636029       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:30:22.507735       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 13:30:22.619393       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7] <==
	
	
	==> kube-controller-manager [677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47] <==
	I0520 13:30:22.482560       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0520 13:30:22.484127       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0520 13:30:22.484347       1 shared_informer.go:320] Caches are synced for disruption
	I0520 13:30:22.485489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0520 13:30:22.486783       1 shared_informer.go:320] Caches are synced for ephemeral
	I0520 13:30:22.486983       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0520 13:30:22.487885       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0520 13:30:22.490116       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 13:30:22.492379       1 shared_informer.go:320] Caches are synced for PV protection
	I0520 13:30:22.499498       1 shared_informer.go:320] Caches are synced for job
	I0520 13:30:22.501263       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0520 13:30:22.501432       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.723µs"
	I0520 13:30:22.504400       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 13:30:22.568423       1 shared_informer.go:320] Caches are synced for taint
	I0520 13:30:22.568683       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0520 13:30:22.568798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-587544"
	I0520 13:30:22.568907       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 13:30:22.577896       1 shared_informer.go:320] Caches are synced for daemon sets
	I0520 13:30:22.606899       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 13:30:22.618363       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 13:30:22.639139       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:30:22.670181       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:30:23.068879       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:30:23.069016       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 13:30:23.102680       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015] <==
	
	
	==> kube-proxy [04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7] <==
	I0520 13:29:50.095516       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:29:50.116370       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.6"]
	I0520 13:29:50.198954       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:29:50.198993       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:29:50.199009       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:29:50.203604       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:29:50.203758       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:29:50.203771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:29:50.205153       1 config.go:192] "Starting service config controller"
	I0520 13:29:50.205170       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:29:50.205467       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:29:50.205477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:29:50.205911       1 config.go:319] "Starting node config controller"
	I0520 13:29:50.205926       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:29:50.306425       1 shared_informer.go:320] Caches are synced for node config
	I0520 13:29:50.306457       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:29:50.306457       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816] <==
	I0520 13:30:10.756096       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:30:10.772448       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.6"]
	I0520 13:30:10.851359       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:30:10.851396       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:30:10.851429       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:30:10.857583       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:30:10.857800       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:30:10.857831       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:30:10.859085       1 config.go:192] "Starting service config controller"
	I0520 13:30:10.859125       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:30:10.859172       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:30:10.859177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:30:10.861288       1 config.go:319] "Starting node config controller"
	I0520 13:30:10.861315       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:30:10.959893       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:30:10.960035       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:30:10.961671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f] <==
	I0520 13:30:07.474297       1 serving.go:380] Generated self-signed cert in-memory
	W0520 13:30:10.039369       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 13:30:10.039528       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:30:10.039559       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 13:30:10.039646       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 13:30:10.083518       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 13:30:10.083616       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:30:10.091657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 13:30:10.091767       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 13:30:10.096127       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 13:30:10.091789       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 13:30:10.199427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83] <==
	
	
	==> kubelet <==
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.555341    3087 kubelet_node_status.go:73] "Attempting to register node" node="pause-587544"
	May 20 13:30:06 pause-587544 kubelet[3087]: E0520 13:30:06.556396    3087 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.6:8443: connect: connection refused" node="pause-587544"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.664307    3087 scope.go:117] "RemoveContainer" containerID="8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.666416    3087 scope.go:117] "RemoveContainer" containerID="00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.666952    3087 scope.go:117] "RemoveContainer" containerID="fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.668053    3087 scope.go:117] "RemoveContainer" containerID="dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015"
	May 20 13:30:06 pause-587544 kubelet[3087]: E0520 13:30:06.858556    3087 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-587544?timeout=10s\": dial tcp 192.168.61.6:8443: connect: connection refused" interval="800ms"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.958340    3087 kubelet_node_status.go:73] "Attempting to register node" node="pause-587544"
	May 20 13:30:06 pause-587544 kubelet[3087]: E0520 13:30:06.959264    3087 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.6:8443: connect: connection refused" node="pause-587544"
	May 20 13:30:07 pause-587544 kubelet[3087]: W0520 13:30:07.083663    3087 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.6:8443: connect: connection refused
	May 20 13:30:07 pause-587544 kubelet[3087]: E0520 13:30:07.083778    3087 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.6:8443: connect: connection refused
	May 20 13:30:07 pause-587544 kubelet[3087]: I0520 13:30:07.761929    3087 kubelet_node_status.go:73] "Attempting to register node" node="pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.206537    3087 kubelet_node_status.go:112] "Node was previously registered" node="pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.206913    3087 kubelet_node_status.go:76] "Successfully registered node" node="pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.208292    3087 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.209426    3087 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.240954    3087 apiserver.go:52] "Watching apiserver"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.245471    3087 topology_manager.go:215] "Topology Admit Handler" podUID="86595569-f17c-477e-8be6-1094a6a73be8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4pv8h"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.245636    3087 topology_manager.go:215] "Topology Admit Handler" podUID="9bb6169f-8624-4bb9-9703-a3b4007b4f24" podNamespace="kube-system" podName="kube-proxy-s7v7z"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.254590    3087 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.319863    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bb6169f-8624-4bb9-9703-a3b4007b4f24-xtables-lock\") pod \"kube-proxy-s7v7z\" (UID: \"9bb6169f-8624-4bb9-9703-a3b4007b4f24\") " pod="kube-system/kube-proxy-s7v7z"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.320036    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bb6169f-8624-4bb9-9703-a3b4007b4f24-lib-modules\") pod \"kube-proxy-s7v7z\" (UID: \"9bb6169f-8624-4bb9-9703-a3b4007b4f24\") " pod="kube-system/kube-proxy-s7v7z"
	May 20 13:30:10 pause-587544 kubelet[3087]: E0520 13:30:10.415624    3087 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-587544\" already exists" pod="kube-system/kube-apiserver-pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.547315    3087 scope.go:117] "RemoveContainer" containerID="04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7"
	May 20 13:30:12 pause-587544 kubelet[3087]: I0520 13:30:12.825556    3087 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-587544 -n pause-587544
helpers_test.go:261: (dbg) Run:  kubectl --context pause-587544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-587544 -n pause-587544
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-587544 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-587544 logs -n 25: (1.392834573s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-456265             | minikube                  | jenkins | v1.26.0 | 20 May 24 13:26 UTC | 20 May 24 13:27 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-782572 sudo           | NoKubernetes-782572       | jenkins | v1.33.1 | 20 May 24 13:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-782572                | NoKubernetes-782572       | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:26 UTC |
	| start   | -p cert-expiration-866786             | cert-expiration-866786    | jenkins | v1.33.1 | 20 May 24 13:26 UTC | 20 May 24 13:27 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-823294             | running-upgrade-823294    | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:27 UTC |
	| start   | -p force-systemd-flag-783351          | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-456265 stop           | minikube                  | jenkins | v1.26.0 | 20 May 24 13:27 UTC | 20 May 24 13:27 UTC |
	| start   | -p stopped-upgrade-456265             | stopped-upgrade-456265    | jenkins | v1.33.1 | 20 May 24 13:27 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-783351 ssh cat     | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-783351          | force-systemd-flag-783351 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p cert-options-043975                | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:29 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-456265             | stopped-upgrade-456265    | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p pause-587544 --memory=2048         | pause-587544              | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:29 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-043975 ssh               | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-043975 -- sudo        | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-043975                | cert-options-043975       | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:28 UTC |
	| start   | -p auto-301514 --memory=3072          | auto-301514               | jenkins | v1.33.1 | 20 May 24 13:28 UTC | 20 May 24 13:30 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:29 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:29 UTC | 20 May 24 13:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-587544                       | pause-587544              | jenkins | v1.33.1 | 20 May 24 13:29 UTC | 20 May 24 13:30 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-301514 pgrep -a               | auto-301514               | jenkins | v1.33.1 | 20 May 24 13:30 UTC | 20 May 24 13:30 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-785943          | kubernetes-upgrade-785943 | jenkins | v1.33.1 | 20 May 24 13:30 UTC | 20 May 24 13:30 UTC |
	| start   | -p kindnet-301514                     | kindnet-301514            | jenkins | v1.33.1 | 20 May 24 13:30 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 13:30:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 13:30:23.853835  906920 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:30:23.854073  906920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:30:23.854083  906920 out.go:304] Setting ErrFile to fd 2...
	I0520 13:30:23.854088  906920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:30:23.854258  906920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:30:23.854965  906920 out.go:298] Setting JSON to false
	I0520 13:30:23.856072  906920 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11572,"bootTime":1716200252,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 13:30:23.856140  906920 start.go:139] virtualization: kvm guest
	I0520 13:30:23.858467  906920 out.go:177] * [kindnet-301514] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 13:30:23.859861  906920 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 13:30:23.859824  906920 notify.go:220] Checking for updates...
	I0520 13:30:23.861446  906920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 13:30:23.862938  906920 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:30:23.864346  906920 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:30:23.865646  906920 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 13:30:23.866960  906920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 13:30:23.868784  906920 config.go:182] Loaded profile config "auto-301514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:23.868917  906920 config.go:182] Loaded profile config "cert-expiration-866786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:23.869101  906920 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:23.869237  906920 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 13:30:23.910388  906920 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 13:30:23.911670  906920 start.go:297] selected driver: kvm2
	I0520 13:30:23.911708  906920 start.go:901] validating driver "kvm2" against <nil>
	I0520 13:30:23.911722  906920 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 13:30:23.912527  906920 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:30:23.912605  906920 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 13:30:23.928940  906920 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 13:30:23.929017  906920 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 13:30:23.929388  906920 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:30:23.929482  906920 cni.go:84] Creating CNI manager for "kindnet"
	I0520 13:30:23.929494  906920 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 13:30:23.929572  906920 start.go:340] cluster config:
	{Name:kindnet-301514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-301514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 13:30:23.929711  906920 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 13:30:23.931609  906920 out.go:177] * Starting "kindnet-301514" primary control-plane node in "kindnet-301514" cluster
	I0520 13:30:23.932794  906920 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 13:30:23.932847  906920 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 13:30:23.932862  906920 cache.go:56] Caching tarball of preloaded images
	I0520 13:30:23.932987  906920 preload.go:173] Found /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 13:30:23.933002  906920 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 13:30:23.933135  906920 profile.go:143] Saving config to /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kindnet-301514/config.json ...
	I0520 13:30:23.933165  906920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/kindnet-301514/config.json: {Name:mk24fe73007643ab14b321023b3e1d358d5d9e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:23.933337  906920 start.go:360] acquireMachinesLock for kindnet-301514: {Name:mk91c1336326c62a2bdbc6f1c2ec12411304ca83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 13:30:23.933380  906920 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "kindnet-301514"
	I0520 13:30:23.933403  906920 start.go:93] Provisioning new machine with config: &{Name:kindnet-301514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-301514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:23.933500  906920 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 13:30:24.221991  906496 pod_ready.go:102] pod "etcd-pause-587544" in "kube-system" namespace has status "Ready":"False"
	I0520 13:30:25.222346  906496 pod_ready.go:92] pod "etcd-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.222373  906496 pod_ready.go:81] duration metric: took 12.007059164s for pod "etcd-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.222385  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.227301  906496 pod_ready.go:92] pod "kube-apiserver-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.227324  906496 pod_ready.go:81] duration metric: took 4.930979ms for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.227335  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.231745  906496 pod_ready.go:92] pod "kube-controller-manager-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.231765  906496 pod_ready.go:81] duration metric: took 4.421361ms for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.231776  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.236256  906496 pod_ready.go:92] pod "kube-proxy-s7v7z" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.236278  906496 pod_ready.go:81] duration metric: took 4.495708ms for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.236286  906496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.241256  906496 pod_ready.go:92] pod "kube-scheduler-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:25.241275  906496 pod_ready.go:81] duration metric: took 4.983313ms for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:25.241281  906496 pod_ready.go:38] duration metric: took 13.537765962s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:25.241298  906496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 13:30:25.257190  906496 ops.go:34] apiserver oom_adj: -16
	I0520 13:30:25.257215  906496 kubeadm.go:591] duration metric: took 20.400996763s to restartPrimaryControlPlane
	I0520 13:30:25.257226  906496 kubeadm.go:393] duration metric: took 20.502478549s to StartCluster
	I0520 13:30:25.257247  906496 settings.go:142] acquiring lock: {Name:mk4281d9011919f2beed93cad1a6e2e67e70984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:25.257339  906496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 13:30:25.258767  906496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18932-852915/kubeconfig: {Name:mk53b7329389b23289bbec52de9b56d2ade0e6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 13:30:25.259028  906496 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 13:30:25.260893  906496 out.go:177] * Verifying Kubernetes components...
	I0520 13:30:25.259107  906496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 13:30:25.259296  906496 config.go:182] Loaded profile config "pause-587544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:30:25.262330  906496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 13:30:25.263691  906496 out.go:177] * Enabled addons: 
	I0520 13:30:25.264890  906496 addons.go:505] duration metric: took 5.780868ms for enable addons: enabled=[]
	I0520 13:30:25.438569  906496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 13:30:25.460424  906496 node_ready.go:35] waiting up to 6m0s for node "pause-587544" to be "Ready" ...
	I0520 13:30:25.464154  906496 node_ready.go:49] node "pause-587544" has status "Ready":"True"
	I0520 13:30:25.464181  906496 node_ready.go:38] duration metric: took 3.723902ms for node "pause-587544" to be "Ready" ...
	I0520 13:30:25.464192  906496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:25.624135  906496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4pv8h" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.020290  906496 pod_ready.go:92] pod "coredns-7db6d8ff4d-4pv8h" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:26.020327  906496 pod_ready.go:81] duration metric: took 396.143665ms for pod "coredns-7db6d8ff4d-4pv8h" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.020342  906496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.420282  906496 pod_ready.go:92] pod "etcd-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:26.420314  906496 pod_ready.go:81] duration metric: took 399.963012ms for pod "etcd-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.420329  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.820397  906496 pod_ready.go:92] pod "kube-apiserver-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:26.820423  906496 pod_ready.go:81] duration metric: took 400.086327ms for pod "kube-apiserver-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:26.820433  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:23.935251  906920 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 13:30:23.935427  906920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:30:23.935475  906920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:30:23.951537  906920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0520 13:30:23.952052  906920 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:30:23.952679  906920 main.go:141] libmachine: Using API Version  1
	I0520 13:30:23.952705  906920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:30:23.953073  906920 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:30:23.953270  906920 main.go:141] libmachine: (kindnet-301514) Calling .GetMachineName
	I0520 13:30:23.953525  906920 main.go:141] libmachine: (kindnet-301514) Calling .DriverName
	I0520 13:30:23.953682  906920 start.go:159] libmachine.API.Create for "kindnet-301514" (driver="kvm2")
	I0520 13:30:23.953713  906920 client.go:168] LocalClient.Create starting
	I0520 13:30:23.953745  906920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/ca.pem
	I0520 13:30:23.953791  906920 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:23.953817  906920 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:23.953893  906920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18932-852915/.minikube/certs/cert.pem
	I0520 13:30:23.953920  906920 main.go:141] libmachine: Decoding PEM data...
	I0520 13:30:23.953936  906920 main.go:141] libmachine: Parsing certificate...
	I0520 13:30:23.953962  906920 main.go:141] libmachine: Running pre-create checks...
	I0520 13:30:23.953972  906920 main.go:141] libmachine: (kindnet-301514) Calling .PreCreateCheck
	I0520 13:30:23.954341  906920 main.go:141] libmachine: (kindnet-301514) Calling .GetConfigRaw
	I0520 13:30:23.954782  906920 main.go:141] libmachine: Creating machine...
	I0520 13:30:23.954798  906920 main.go:141] libmachine: (kindnet-301514) Calling .Create
	I0520 13:30:23.954959  906920 main.go:141] libmachine: (kindnet-301514) Creating KVM machine...
	I0520 13:30:23.956419  906920 main.go:141] libmachine: (kindnet-301514) DBG | found existing default KVM network
	I0520 13:30:23.957681  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:23.957512  906943 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:94:d3} reservation:<nil>}
	I0520 13:30:23.958900  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:23.958789  906943 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a8750}
	I0520 13:30:23.958923  906920 main.go:141] libmachine: (kindnet-301514) DBG | created network xml: 
	I0520 13:30:23.958933  906920 main.go:141] libmachine: (kindnet-301514) DBG | <network>
	I0520 13:30:23.958941  906920 main.go:141] libmachine: (kindnet-301514) DBG |   <name>mk-kindnet-301514</name>
	I0520 13:30:23.958956  906920 main.go:141] libmachine: (kindnet-301514) DBG |   <dns enable='no'/>
	I0520 13:30:23.958964  906920 main.go:141] libmachine: (kindnet-301514) DBG |   
	I0520 13:30:23.958973  906920 main.go:141] libmachine: (kindnet-301514) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0520 13:30:23.958980  906920 main.go:141] libmachine: (kindnet-301514) DBG |     <dhcp>
	I0520 13:30:23.958990  906920 main.go:141] libmachine: (kindnet-301514) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0520 13:30:23.959002  906920 main.go:141] libmachine: (kindnet-301514) DBG |     </dhcp>
	I0520 13:30:23.959019  906920 main.go:141] libmachine: (kindnet-301514) DBG |   </ip>
	I0520 13:30:23.959030  906920 main.go:141] libmachine: (kindnet-301514) DBG |   
	I0520 13:30:23.959041  906920 main.go:141] libmachine: (kindnet-301514) DBG | </network>
	I0520 13:30:23.959051  906920 main.go:141] libmachine: (kindnet-301514) DBG | 
	I0520 13:30:23.963788  906920 main.go:141] libmachine: (kindnet-301514) DBG | trying to create private KVM network mk-kindnet-301514 192.168.50.0/24...
	I0520 13:30:24.040929  906920 main.go:141] libmachine: (kindnet-301514) DBG | private KVM network mk-kindnet-301514 192.168.50.0/24 created
	I0520 13:30:24.040975  906920 main.go:141] libmachine: (kindnet-301514) Setting up store path in /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514 ...
	I0520 13:30:24.040996  906920 main.go:141] libmachine: (kindnet-301514) Building disk image from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 13:30:24.041023  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.040942  906943 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:30:24.041134  906920 main.go:141] libmachine: (kindnet-301514) Downloading /home/jenkins/minikube-integration/18932-852915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 13:30:24.309468  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.309343  906943 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/id_rsa...
	I0520 13:30:24.478870  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.478706  906943 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/kindnet-301514.rawdisk...
	I0520 13:30:24.478903  906920 main.go:141] libmachine: (kindnet-301514) DBG | Writing magic tar header
	I0520 13:30:24.478916  906920 main.go:141] libmachine: (kindnet-301514) DBG | Writing SSH key tar header
	I0520 13:30:24.478932  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:24.478827  906943 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514 ...
	I0520 13:30:24.479012  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514
	I0520 13:30:24.479042  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube/machines
	I0520 13:30:24.479057  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514 (perms=drwx------)
	I0520 13:30:24.479071  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube/machines (perms=drwxr-xr-x)
	I0520 13:30:24.479081  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915/.minikube (perms=drwxr-xr-x)
	I0520 13:30:24.479100  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration/18932-852915 (perms=drwxrwxr-x)
	I0520 13:30:24.479116  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 13:30:24.479126  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 13:30:24.479139  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18932-852915
	I0520 13:30:24.479148  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 13:30:24.479157  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home/jenkins
	I0520 13:30:24.479163  906920 main.go:141] libmachine: (kindnet-301514) DBG | Checking permissions on dir: /home
	I0520 13:30:24.479171  906920 main.go:141] libmachine: (kindnet-301514) DBG | Skipping /home - not owner
	I0520 13:30:24.479181  906920 main.go:141] libmachine: (kindnet-301514) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 13:30:24.479187  906920 main.go:141] libmachine: (kindnet-301514) Creating domain...
	I0520 13:30:24.480211  906920 main.go:141] libmachine: (kindnet-301514) define libvirt domain using xml: 
	I0520 13:30:24.480233  906920 main.go:141] libmachine: (kindnet-301514) <domain type='kvm'>
	I0520 13:30:24.480243  906920 main.go:141] libmachine: (kindnet-301514)   <name>kindnet-301514</name>
	I0520 13:30:24.480274  906920 main.go:141] libmachine: (kindnet-301514)   <memory unit='MiB'>3072</memory>
	I0520 13:30:24.480289  906920 main.go:141] libmachine: (kindnet-301514)   <vcpu>2</vcpu>
	I0520 13:30:24.480296  906920 main.go:141] libmachine: (kindnet-301514)   <features>
	I0520 13:30:24.480307  906920 main.go:141] libmachine: (kindnet-301514)     <acpi/>
	I0520 13:30:24.480315  906920 main.go:141] libmachine: (kindnet-301514)     <apic/>
	I0520 13:30:24.480390  906920 main.go:141] libmachine: (kindnet-301514)     <pae/>
	I0520 13:30:24.480419  906920 main.go:141] libmachine: (kindnet-301514)     
	I0520 13:30:24.480430  906920 main.go:141] libmachine: (kindnet-301514)   </features>
	I0520 13:30:24.480451  906920 main.go:141] libmachine: (kindnet-301514)   <cpu mode='host-passthrough'>
	I0520 13:30:24.480463  906920 main.go:141] libmachine: (kindnet-301514)   
	I0520 13:30:24.480481  906920 main.go:141] libmachine: (kindnet-301514)   </cpu>
	I0520 13:30:24.480492  906920 main.go:141] libmachine: (kindnet-301514)   <os>
	I0520 13:30:24.480500  906920 main.go:141] libmachine: (kindnet-301514)     <type>hvm</type>
	I0520 13:30:24.480511  906920 main.go:141] libmachine: (kindnet-301514)     <boot dev='cdrom'/>
	I0520 13:30:24.480518  906920 main.go:141] libmachine: (kindnet-301514)     <boot dev='hd'/>
	I0520 13:30:24.480542  906920 main.go:141] libmachine: (kindnet-301514)     <bootmenu enable='no'/>
	I0520 13:30:24.480565  906920 main.go:141] libmachine: (kindnet-301514)   </os>
	I0520 13:30:24.480577  906920 main.go:141] libmachine: (kindnet-301514)   <devices>
	I0520 13:30:24.480591  906920 main.go:141] libmachine: (kindnet-301514)     <disk type='file' device='cdrom'>
	I0520 13:30:24.480613  906920 main.go:141] libmachine: (kindnet-301514)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/boot2docker.iso'/>
	I0520 13:30:24.480633  906920 main.go:141] libmachine: (kindnet-301514)       <target dev='hdc' bus='scsi'/>
	I0520 13:30:24.480645  906920 main.go:141] libmachine: (kindnet-301514)       <readonly/>
	I0520 13:30:24.480655  906920 main.go:141] libmachine: (kindnet-301514)     </disk>
	I0520 13:30:24.480665  906920 main.go:141] libmachine: (kindnet-301514)     <disk type='file' device='disk'>
	I0520 13:30:24.480682  906920 main.go:141] libmachine: (kindnet-301514)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 13:30:24.480699  906920 main.go:141] libmachine: (kindnet-301514)       <source file='/home/jenkins/minikube-integration/18932-852915/.minikube/machines/kindnet-301514/kindnet-301514.rawdisk'/>
	I0520 13:30:24.480714  906920 main.go:141] libmachine: (kindnet-301514)       <target dev='hda' bus='virtio'/>
	I0520 13:30:24.480723  906920 main.go:141] libmachine: (kindnet-301514)     </disk>
	I0520 13:30:24.480730  906920 main.go:141] libmachine: (kindnet-301514)     <interface type='network'>
	I0520 13:30:24.480742  906920 main.go:141] libmachine: (kindnet-301514)       <source network='mk-kindnet-301514'/>
	I0520 13:30:24.480749  906920 main.go:141] libmachine: (kindnet-301514)       <model type='virtio'/>
	I0520 13:30:24.480761  906920 main.go:141] libmachine: (kindnet-301514)     </interface>
	I0520 13:30:24.480771  906920 main.go:141] libmachine: (kindnet-301514)     <interface type='network'>
	I0520 13:30:24.480782  906920 main.go:141] libmachine: (kindnet-301514)       <source network='default'/>
	I0520 13:30:24.480796  906920 main.go:141] libmachine: (kindnet-301514)       <model type='virtio'/>
	I0520 13:30:24.480808  906920 main.go:141] libmachine: (kindnet-301514)     </interface>
	I0520 13:30:24.480815  906920 main.go:141] libmachine: (kindnet-301514)     <serial type='pty'>
	I0520 13:30:24.480823  906920 main.go:141] libmachine: (kindnet-301514)       <target port='0'/>
	I0520 13:30:24.480830  906920 main.go:141] libmachine: (kindnet-301514)     </serial>
	I0520 13:30:24.480839  906920 main.go:141] libmachine: (kindnet-301514)     <console type='pty'>
	I0520 13:30:24.480849  906920 main.go:141] libmachine: (kindnet-301514)       <target type='serial' port='0'/>
	I0520 13:30:24.480859  906920 main.go:141] libmachine: (kindnet-301514)     </console>
	I0520 13:30:24.480874  906920 main.go:141] libmachine: (kindnet-301514)     <rng model='virtio'>
	I0520 13:30:24.480886  906920 main.go:141] libmachine: (kindnet-301514)       <backend model='random'>/dev/random</backend>
	I0520 13:30:24.480894  906920 main.go:141] libmachine: (kindnet-301514)     </rng>
	I0520 13:30:24.480905  906920 main.go:141] libmachine: (kindnet-301514)     
	I0520 13:30:24.480914  906920 main.go:141] libmachine: (kindnet-301514)     
	I0520 13:30:24.480923  906920 main.go:141] libmachine: (kindnet-301514)   </devices>
	I0520 13:30:24.480932  906920 main.go:141] libmachine: (kindnet-301514) </domain>
	I0520 13:30:24.480943  906920 main.go:141] libmachine: (kindnet-301514) 
	I0520 13:30:24.485189  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:fb:47:1c in network default
	I0520 13:30:24.485776  906920 main.go:141] libmachine: (kindnet-301514) Ensuring networks are active...
	I0520 13:30:24.485808  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:24.486595  906920 main.go:141] libmachine: (kindnet-301514) Ensuring network default is active
	I0520 13:30:24.486926  906920 main.go:141] libmachine: (kindnet-301514) Ensuring network mk-kindnet-301514 is active
	I0520 13:30:24.487596  906920 main.go:141] libmachine: (kindnet-301514) Getting domain xml...
	I0520 13:30:24.488327  906920 main.go:141] libmachine: (kindnet-301514) Creating domain...
	I0520 13:30:25.765419  906920 main.go:141] libmachine: (kindnet-301514) Waiting to get IP...
	I0520 13:30:25.766469  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:25.767059  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:25.767086  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:25.767038  906943 retry.go:31] will retry after 299.088602ms: waiting for machine to come up
	I0520 13:30:26.068432  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:26.069030  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:26.069065  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:26.068979  906943 retry.go:31] will retry after 316.527825ms: waiting for machine to come up
	I0520 13:30:26.387669  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:26.388173  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:26.388205  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:26.388127  906943 retry.go:31] will retry after 394.159ms: waiting for machine to come up
	I0520 13:30:26.783655  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:26.784185  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:26.784213  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:26.784121  906943 retry.go:31] will retry after 467.903678ms: waiting for machine to come up
	I0520 13:30:27.253357  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:27.253851  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:27.253878  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:27.253804  906943 retry.go:31] will retry after 574.175778ms: waiting for machine to come up
	I0520 13:30:27.829129  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:27.829635  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:27.829660  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:27.829585  906943 retry.go:31] will retry after 880.232257ms: waiting for machine to come up
	I0520 13:30:28.711258  906920 main.go:141] libmachine: (kindnet-301514) DBG | domain kindnet-301514 has defined MAC address 52:54:00:a0:03:00 in network mk-kindnet-301514
	I0520 13:30:28.711718  906920 main.go:141] libmachine: (kindnet-301514) DBG | unable to find current IP address of domain kindnet-301514 in network mk-kindnet-301514
	I0520 13:30:28.711748  906920 main.go:141] libmachine: (kindnet-301514) DBG | I0520 13:30:28.711655  906943 retry.go:31] will retry after 750.031656ms: waiting for machine to come up
	I0520 13:30:27.220129  906496 pod_ready.go:92] pod "kube-controller-manager-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:27.220158  906496 pod_ready.go:81] duration metric: took 399.717093ms for pod "kube-controller-manager-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:27.220170  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:27.621145  906496 pod_ready.go:92] pod "kube-proxy-s7v7z" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:27.621176  906496 pod_ready.go:81] duration metric: took 400.999642ms for pod "kube-proxy-s7v7z" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:27.621187  906496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:28.019261  906496 pod_ready.go:92] pod "kube-scheduler-pause-587544" in "kube-system" namespace has status "Ready":"True"
	I0520 13:30:28.019293  906496 pod_ready.go:81] duration metric: took 398.097225ms for pod "kube-scheduler-pause-587544" in "kube-system" namespace to be "Ready" ...
	I0520 13:30:28.019306  906496 pod_ready.go:38] duration metric: took 2.555100881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 13:30:28.019330  906496 api_server.go:52] waiting for apiserver process to appear ...
	I0520 13:30:28.019394  906496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:30:28.037277  906496 api_server.go:72] duration metric: took 2.778214016s to wait for apiserver process to appear ...
	I0520 13:30:28.037308  906496 api_server.go:88] waiting for apiserver healthz status ...
	I0520 13:30:28.037331  906496 api_server.go:253] Checking apiserver healthz at https://192.168.61.6:8443/healthz ...
	I0520 13:30:28.042854  906496 api_server.go:279] https://192.168.61.6:8443/healthz returned 200:
	ok
	I0520 13:30:28.043937  906496 api_server.go:141] control plane version: v1.30.1
	I0520 13:30:28.043958  906496 api_server.go:131] duration metric: took 6.642823ms to wait for apiserver health ...
	I0520 13:30:28.043965  906496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 13:30:28.222698  906496 system_pods.go:59] 6 kube-system pods found
	I0520 13:30:28.222734  906496 system_pods.go:61] "coredns-7db6d8ff4d-4pv8h" [86595569-f17c-477e-8be6-1094a6a73be8] Running
	I0520 13:30:28.222741  906496 system_pods.go:61] "etcd-pause-587544" [7219b565-84e4-40f6-9c7a-7847da77a04a] Running
	I0520 13:30:28.222746  906496 system_pods.go:61] "kube-apiserver-pause-587544" [79e78b6d-0a7b-4f59-b5e2-772cdade9f5f] Running
	I0520 13:30:28.222751  906496 system_pods.go:61] "kube-controller-manager-pause-587544" [33f29992-dec6-4077-bc45-64bb7d1e07ec] Running
	I0520 13:30:28.222755  906496 system_pods.go:61] "kube-proxy-s7v7z" [9bb6169f-8624-4bb9-9703-a3b4007b4f24] Running
	I0520 13:30:28.222758  906496 system_pods.go:61] "kube-scheduler-pause-587544" [c9c20432-704d-4dfa-a580-8abdd5b17b5b] Running
	I0520 13:30:28.222766  906496 system_pods.go:74] duration metric: took 178.794555ms to wait for pod list to return data ...
	I0520 13:30:28.222781  906496 default_sa.go:34] waiting for default service account to be created ...
	I0520 13:30:28.420235  906496 default_sa.go:45] found service account: "default"
	I0520 13:30:28.420268  906496 default_sa.go:55] duration metric: took 197.474301ms for default service account to be created ...
	I0520 13:30:28.420279  906496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 13:30:28.623608  906496 system_pods.go:86] 6 kube-system pods found
	I0520 13:30:28.623647  906496 system_pods.go:89] "coredns-7db6d8ff4d-4pv8h" [86595569-f17c-477e-8be6-1094a6a73be8] Running
	I0520 13:30:28.623652  906496 system_pods.go:89] "etcd-pause-587544" [7219b565-84e4-40f6-9c7a-7847da77a04a] Running
	I0520 13:30:28.623661  906496 system_pods.go:89] "kube-apiserver-pause-587544" [79e78b6d-0a7b-4f59-b5e2-772cdade9f5f] Running
	I0520 13:30:28.623665  906496 system_pods.go:89] "kube-controller-manager-pause-587544" [33f29992-dec6-4077-bc45-64bb7d1e07ec] Running
	I0520 13:30:28.623669  906496 system_pods.go:89] "kube-proxy-s7v7z" [9bb6169f-8624-4bb9-9703-a3b4007b4f24] Running
	I0520 13:30:28.623673  906496 system_pods.go:89] "kube-scheduler-pause-587544" [c9c20432-704d-4dfa-a580-8abdd5b17b5b] Running
	I0520 13:30:28.623681  906496 system_pods.go:126] duration metric: took 203.394432ms to wait for k8s-apps to be running ...
	I0520 13:30:28.623691  906496 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 13:30:28.623753  906496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:30:28.639742  906496 system_svc.go:56] duration metric: took 16.038643ms WaitForService to wait for kubelet
	I0520 13:30:28.639779  906496 kubeadm.go:576] duration metric: took 3.380721942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 13:30:28.639805  906496 node_conditions.go:102] verifying NodePressure condition ...
	I0520 13:30:28.819642  906496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 13:30:28.819676  906496 node_conditions.go:123] node cpu capacity is 2
	I0520 13:30:28.819691  906496 node_conditions.go:105] duration metric: took 179.879032ms to run NodePressure ...
	I0520 13:30:28.819706  906496 start.go:240] waiting for startup goroutines ...
	I0520 13:30:28.819715  906496 start.go:245] waiting for cluster config update ...
	I0520 13:30:28.819727  906496 start.go:254] writing updated cluster config ...
	I0520 13:30:28.820104  906496 ssh_runner.go:195] Run: rm -f paused
	I0520 13:30:28.879338  906496 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 13:30:28.881261  906496 out.go:177] * Done! kubectl is now configured to use "pause-587544" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.541907924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211831541884869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=251d1151-6a56-4003-ae30-2751b70e626b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.542505608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbb25974-a45a-4981-ad11-8974863a38be name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.542717624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbb25974-a45a-4981-ad11-8974863a38be name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.542972572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbb25974-a45a-4981-ad11-8974863a38be name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.587669412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cbd5098-c069-401f-a07d-0ee7ec4bdb76 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.587830048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cbd5098-c069-401f-a07d-0ee7ec4bdb76 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.589867634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=314e0a51-13b1-491a-98fd-41d26e6011fa name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.591827683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211831591794260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=314e0a51-13b1-491a-98fd-41d26e6011fa name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.593801494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34a10425-b32a-4773-9db8-5bf729c485e9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.593868154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34a10425-b32a-4773-9db8-5bf729c485e9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.594236189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34a10425-b32a-4773-9db8-5bf729c485e9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.640577176Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90e2a922-3e31-4e2e-a685-2cc710d8fa7d name=/runtime.v1.RuntimeService/Version
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.640648741Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90e2a922-3e31-4e2e-a685-2cc710d8fa7d name=/runtime.v1.RuntimeService/Version
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.642026666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=452a229d-dde0-4acf-897b-ee9cc842d492 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.642446821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211831642422553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=452a229d-dde0-4acf-897b-ee9cc842d492 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.643130871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98aa7d7c-d33c-47dc-9271-e62fa8b648d3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.643261404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98aa7d7c-d33c-47dc-9271-e62fa8b648d3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.643630983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98aa7d7c-d33c-47dc-9271-e62fa8b648d3 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.697306879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9c43594-0cb1-4ec1-9fbc-05102566f311 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.697379947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9c43594-0cb1-4ec1-9fbc-05102566f311 name=/runtime.v1.RuntimeService/Version
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.699024008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4327299-b622-483a-8111-b6a6c4880ae9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.699464185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716211831699439159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4327299-b622-483a-8111-b6a6c4880ae9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.700159129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c9d5ea0-b8f5-47ee-8289-7e6e32f4dc69 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.700298326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c9d5ea0-b8f5-47ee-8289-7e6e32f4dc69 name=/runtime.v1.RuntimeService/ListContainers
	May 20 13:30:31 pause-587544 crio[2682]: time="2024-05-20 13:30:31.700560663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff,PodSandboxId:b206fb941899b59a45b68c85ed3888705597b0026dd3bdafeb22fead525cbcb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716211810889612753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816,PodSandboxId:cb325a1ca4b9b0306ece06399685db462f19adb30741c283702634ac2caca57b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716211810572849155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449,PodSandboxId:99150cc12c0d96bacc5e3e7c97db0b479168ce23cc1ad5ea2bc3c9240e4c23ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716211806712600762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annot
ations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f,PodSandboxId:0e8de9422d55f20808e09b459c8adb4adca8d99076d2c09968b857b673354bf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716211806701894821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]
string{io.kubernetes.container.hash: 8491067c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f,PodSandboxId:bca6dc98347ca6e76d65ee7bc6eed00a7d086969879f63d3495beb6487f05269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716211806684344619,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernet
es.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47,PodSandboxId:8eaa604737fbe43dc691932ab4b8a5a1b3fccaf8c4ca1af946b9fba8bf8eb255,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716211806689661151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7,PodSandboxId:38c5b4d5f317d041ae3296f703257987becf033574dea11006a6a115a5935f55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716211802698604112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a951917d0ac3bd7a752b2c26d9099db,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8491067c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015,PodSandboxId:42fc97161c764f917e3eace5e27f6da40a818f6547c7e29d602f309f44145479,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716211802528318397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d30c9795953e9ca64ab7abc47a62908,},Annotations:map[string]string{io.kubernetes
.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83,PodSandboxId:ccc780018d7ca45202b4f85109fc6aabb6349032528c66ac51b3eb4565263255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716211802681620872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561e00411b3e8693eb8ec85d813c9e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20
0064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8,PodSandboxId:9c66483e8c8e833981d42dccf2ac3bd9474f4dafe48a7eecfb63e1d0798a704d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716211802426778882,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-587544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7184f63dab8462b768f7219ec79eab,},Annotations:map[string]string{io.kubernetes.container.hash: ccd4762c,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99,PodSandboxId:72daa8789d7288c868b8caa468e0288a156854a58f3531407707eaa5c334984c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716211790312160699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4pv8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86595569-f17c-477e-8be6-1094a6a73be8,},Annotations:map[string]string{io.kubernetes.container.hash: 1f55c942,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7,PodSandboxId:d2e003ce745c0da788e31ee3dc1b132c283cce7c3e5adc3cf962d22e7f0db94d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716211789843945635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7v7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bb6169f-8624-4bb9-9703-a3b4007b4f24,},Annotations:map[string]string{io.kubernetes.container.hash: 605a24d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c9d5ea0-b8f5-47ee-8289-7e6e32f4dc69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8bd8e14647d2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   1                   b206fb941899b       coredns-7db6d8ff4d-4pv8h
	f910001e4f2db       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   21 seconds ago      Running             kube-proxy                1                   cb325a1ca4b9b       kube-proxy-s7v7z
	f3c710c9e3948       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago      Running             etcd                      2                   99150cc12c0d9       etcd-pause-587544
	cf9235585eac9       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   25 seconds ago      Running             kube-apiserver            2                   0e8de9422d55f       kube-apiserver-pause-587544
	677c880f54fce       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   25 seconds ago      Running             kube-controller-manager   2                   8eaa604737fbe       kube-controller-manager-pause-587544
	426e57513abec       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   25 seconds ago      Running             kube-scheduler            2                   bca6dc98347ca       kube-scheduler-pause-587544
	fa7d92021d0bb       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   29 seconds ago      Exited              kube-apiserver            1                   38c5b4d5f317d       kube-apiserver-pause-587544
	8e482a4ef5144       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   29 seconds ago      Exited              kube-scheduler            1                   ccc780018d7ca       kube-scheduler-pause-587544
	dd0fdc485f85f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   29 seconds ago      Exited              kube-controller-manager   1                   42fc97161c764       kube-controller-manager-pause-587544
	00c3af83ad0ee       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   9c66483e8c8e8       etcd-pause-587544
	2c07e95b8560a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   41 seconds ago      Exited              coredns                   0                   72daa8789d728       coredns-7db6d8ff4d-4pv8h
	04cec37cef9fa       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   41 seconds ago      Exited              kube-proxy                0                   d2e003ce745c0       kube-proxy-s7v7z
	
	
	==> coredns [2c07e95b8560a0c73c6873d90fefdb636869e66ecf19390a14b38b6769c7bf99] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40900 - 2324 "HINFO IN 8476660019988762421.5394260937817126370. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013051081s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8bd8e14647d2e2bd17b8f30cd9e80e78a622323109250d79beb71e61213c9dff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52214 - 19952 "HINFO IN 5349525857651037922.1821123622764411973. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029849038s
	
	
	==> describe nodes <==
	Name:               pause-587544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-587544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=pause-587544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T13_29_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 13:29:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-587544
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 13:30:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 13:30:10 +0000   Mon, 20 May 2024 13:29:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.6
	  Hostname:    pause-587544
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b66f57f737c549779dcab985441bb9bd
	  System UUID:                b66f57f7-37c5-4977-9dca-b985441bb9bd
	  Boot ID:                    31a89234-52e9-4426-8c92-da0a9007a676
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4pv8h                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     43s
	  kube-system                 etcd-pause-587544                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         57s
	  kube-system                 kube-apiserver-pause-587544             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-pause-587544    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-s7v7z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-pause-587544             100m (5%)     0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     57s                kubelet          Node pause-587544 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  57s                kubelet          Node pause-587544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s                kubelet          Node pause-587544 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeReady                55s                kubelet          Node pause-587544 status is now: NodeReady
	  Normal  RegisteredNode           44s                node-controller  Node pause-587544 event: Registered Node pause-587544 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-587544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-587544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-587544 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-587544 event: Registered Node pause-587544 in Controller
	
	
	==> dmesg <==
	[ +13.101316] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.057470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059446] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.177312] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.138999] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.319888] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.648002] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065554] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.525015] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.712493] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.228878] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.614213] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[ +13.396682] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +0.106445] kauditd_printk_skb: 15 callbacks suppressed
	[May20 13:30] systemd-fstab-generator[2143]: Ignoring "noauto" option for root device
	[  +0.126634] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.086524] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.241855] systemd-fstab-generator[2169]: Ignoring "noauto" option for root device
	[  +0.150873] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.880829] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +1.125651] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +2.169665] systemd-fstab-generator[3080]: Ignoring "noauto" option for root device
	[  +0.358619] kauditd_printk_skb: 239 callbacks suppressed
	[ +16.151857] kauditd_printk_skb: 37 callbacks suppressed
	[  +2.793298] systemd-fstab-generator[3620]: Ignoring "noauto" option for root device
	
	
	==> etcd [00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8] <==
	{"level":"warn","ts":"2024-05-20T13:30:02.962724Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-20T13:30:02.96289Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.6:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.6:2380","--initial-cluster=pause-587544=https://192.168.61.6:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.6:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.6:2380","--name=pause-587544","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file
=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-05-20T13:30:02.963041Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-05-20T13:30:02.963095Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-20T13:30:02.963123Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.6:2380"]}
	{"level":"info","ts":"2024-05-20T13:30:02.963172Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:30:02.96406Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.6:2379"]}
	{"level":"info","ts":"2024-05-20T13:30:02.964372Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-587544","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.6:2380"],"listen-peer-urls":["https://192.168.61.6:2380"],"advertise-client-urls":["https://192.168.61.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-to
ken":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-05-20T13:30:02.972683Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"8.087724ms"}
	
	
	==> etcd [f3c710c9e394868abdbf582b0bfe4d488398a064b9f2442b2c99686766fd3449] <==
	{"level":"info","ts":"2024-05-20T13:30:07.059695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:07.059793Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T13:30:07.060254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 switched to configuration voters=(14345459793014229238)"}
	{"level":"info","ts":"2024-05-20T13:30:07.060547Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"75789e35a2c78ec","local-member-id":"c7154fad1e4308f6","added-peer-id":"c7154fad1e4308f6","added-peer-peer-urls":["https://192.168.61.6:2380"]}
	{"level":"info","ts":"2024-05-20T13:30:07.061357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"75789e35a2c78ec","local-member-id":"c7154fad1e4308f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:07.061728Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T13:30:07.073581Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T13:30:07.076664Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.6:2380"}
	{"level":"info","ts":"2024-05-20T13:30:07.079266Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.6:2380"}
	{"level":"info","ts":"2024-05-20T13:30:07.079481Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c7154fad1e4308f6","initial-advertise-peer-urls":["https://192.168.61.6:2380"],"listen-peer-urls":["https://192.168.61.6:2380"],"advertise-client-urls":["https://192.168.61.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T13:30:07.08043Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T13:30:08.621441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:08.62148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:08.621513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 received MsgPreVoteResp from c7154fad1e4308f6 at term 2"}
	{"level":"info","ts":"2024-05-20T13:30:08.621526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.621532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 received MsgVoteResp from c7154fad1e4308f6 at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.62154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7154fad1e4308f6 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.621581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7154fad1e4308f6 elected leader c7154fad1e4308f6 at term 3"}
	{"level":"info","ts":"2024-05-20T13:30:08.627639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c7154fad1e4308f6","local-member-attributes":"{Name:pause-587544 ClientURLs:[https://192.168.61.6:2379]}","request-path":"/0/members/c7154fad1e4308f6/attributes","cluster-id":"75789e35a2c78ec","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T13:30:08.627687Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:30:08.627909Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T13:30:08.627962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T13:30:08.628Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T13:30:08.629849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.6:2379"}
	{"level":"info","ts":"2024-05-20T13:30:08.629848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:30:32 up 1 min,  0 users,  load average: 1.02, 0.36, 0.13
	Linux pause-587544 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cf9235585eac9c77ae77ea4176ecee5b1317b9bf9f779ad748d439b0fde7696f] <==
	I0520 13:30:10.057055       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 13:30:10.070432       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 13:30:10.070611       1 aggregator.go:165] initial CRD sync complete...
	I0520 13:30:10.070643       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 13:30:10.070671       1 cache.go:32] Waiting for caches to sync for autoregister controller
	E0520 13:30:10.110166       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 13:30:10.140700       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 13:30:10.143014       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 13:30:10.143032       1 policy_source.go:224] refreshing policies
	I0520 13:30:10.147588       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 13:30:10.147680       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 13:30:10.149003       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 13:30:10.152647       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 13:30:10.155983       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 13:30:10.156029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 13:30:10.165409       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 13:30:10.171627       1 cache.go:39] Caches are synced for autoregister controller
	I0520 13:30:10.963147       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 13:30:11.527631       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 13:30:11.543897       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 13:30:11.591963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 13:30:11.627327       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 13:30:11.636029       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 13:30:22.507735       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 13:30:22.619393       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7] <==
	
	
	==> kube-controller-manager [677c880f54fced2a02c257a1043979bffdffa749684dea11dfefe442238a2c47] <==
	I0520 13:30:22.482560       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0520 13:30:22.484127       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0520 13:30:22.484347       1 shared_informer.go:320] Caches are synced for disruption
	I0520 13:30:22.485489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0520 13:30:22.486783       1 shared_informer.go:320] Caches are synced for ephemeral
	I0520 13:30:22.486983       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0520 13:30:22.487885       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0520 13:30:22.490116       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 13:30:22.492379       1 shared_informer.go:320] Caches are synced for PV protection
	I0520 13:30:22.499498       1 shared_informer.go:320] Caches are synced for job
	I0520 13:30:22.501263       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0520 13:30:22.501432       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.723µs"
	I0520 13:30:22.504400       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 13:30:22.568423       1 shared_informer.go:320] Caches are synced for taint
	I0520 13:30:22.568683       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0520 13:30:22.568798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-587544"
	I0520 13:30:22.568907       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 13:30:22.577896       1 shared_informer.go:320] Caches are synced for daemon sets
	I0520 13:30:22.606899       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 13:30:22.618363       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 13:30:22.639139       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:30:22.670181       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 13:30:23.068879       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 13:30:23.069016       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 13:30:23.102680       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015] <==
	
	
	==> kube-proxy [04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7] <==
	I0520 13:29:50.095516       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:29:50.116370       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.6"]
	I0520 13:29:50.198954       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:29:50.198993       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:29:50.199009       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:29:50.203604       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:29:50.203758       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:29:50.203771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:29:50.205153       1 config.go:192] "Starting service config controller"
	I0520 13:29:50.205170       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:29:50.205467       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:29:50.205477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:29:50.205911       1 config.go:319] "Starting node config controller"
	I0520 13:29:50.205926       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:29:50.306425       1 shared_informer.go:320] Caches are synced for node config
	I0520 13:29:50.306457       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:29:50.306457       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f910001e4f2db2d2be4e2a75bab9a85ec91fde33cb0bf6a9e5ddc76d90fc8816] <==
	I0520 13:30:10.756096       1 server_linux.go:69] "Using iptables proxy"
	I0520 13:30:10.772448       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.6"]
	I0520 13:30:10.851359       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 13:30:10.851396       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 13:30:10.851429       1 server_linux.go:165] "Using iptables Proxier"
	I0520 13:30:10.857583       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 13:30:10.857800       1 server.go:872] "Version info" version="v1.30.1"
	I0520 13:30:10.857831       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:30:10.859085       1 config.go:192] "Starting service config controller"
	I0520 13:30:10.859125       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 13:30:10.859172       1 config.go:101] "Starting endpoint slice config controller"
	I0520 13:30:10.859177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 13:30:10.861288       1 config.go:319] "Starting node config controller"
	I0520 13:30:10.861315       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 13:30:10.959893       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 13:30:10.960035       1 shared_informer.go:320] Caches are synced for service config
	I0520 13:30:10.961671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [426e57513abec66d09178f262f3f4611a29282faa86de7e7838e358ad5561a2f] <==
	I0520 13:30:07.474297       1 serving.go:380] Generated self-signed cert in-memory
	W0520 13:30:10.039369       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 13:30:10.039528       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 13:30:10.039559       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 13:30:10.039646       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 13:30:10.083518       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 13:30:10.083616       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 13:30:10.091657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 13:30:10.091767       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 13:30:10.096127       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 13:30:10.091789       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 13:30:10.199427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83] <==
	
	
	==> kubelet <==
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.555341    3087 kubelet_node_status.go:73] "Attempting to register node" node="pause-587544"
	May 20 13:30:06 pause-587544 kubelet[3087]: E0520 13:30:06.556396    3087 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.6:8443: connect: connection refused" node="pause-587544"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.664307    3087 scope.go:117] "RemoveContainer" containerID="8e482a4ef514497e8bb1ee44e8250f6759aaddb784f15101ff1163f71ce93f83"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.666416    3087 scope.go:117] "RemoveContainer" containerID="00c3af83ad0eeb1d3e43cd4abf63b816438a1f54ec2f3d0a0638520fe38b60a8"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.666952    3087 scope.go:117] "RemoveContainer" containerID="fa7d92021d0bb68653bba929fb9961a2ebe333c393aab8065d8f28f98ed793a7"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.668053    3087 scope.go:117] "RemoveContainer" containerID="dd0fdc485f85f8aec7f391e5fd9b92eac08246617b6c70cd27f0239e15616015"
	May 20 13:30:06 pause-587544 kubelet[3087]: E0520 13:30:06.858556    3087 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-587544?timeout=10s\": dial tcp 192.168.61.6:8443: connect: connection refused" interval="800ms"
	May 20 13:30:06 pause-587544 kubelet[3087]: I0520 13:30:06.958340    3087 kubelet_node_status.go:73] "Attempting to register node" node="pause-587544"
	May 20 13:30:06 pause-587544 kubelet[3087]: E0520 13:30:06.959264    3087 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.6:8443: connect: connection refused" node="pause-587544"
	May 20 13:30:07 pause-587544 kubelet[3087]: W0520 13:30:07.083663    3087 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.6:8443: connect: connection refused
	May 20 13:30:07 pause-587544 kubelet[3087]: E0520 13:30:07.083778    3087 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.6:8443: connect: connection refused
	May 20 13:30:07 pause-587544 kubelet[3087]: I0520 13:30:07.761929    3087 kubelet_node_status.go:73] "Attempting to register node" node="pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.206537    3087 kubelet_node_status.go:112] "Node was previously registered" node="pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.206913    3087 kubelet_node_status.go:76] "Successfully registered node" node="pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.208292    3087 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.209426    3087 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.240954    3087 apiserver.go:52] "Watching apiserver"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.245471    3087 topology_manager.go:215] "Topology Admit Handler" podUID="86595569-f17c-477e-8be6-1094a6a73be8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4pv8h"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.245636    3087 topology_manager.go:215] "Topology Admit Handler" podUID="9bb6169f-8624-4bb9-9703-a3b4007b4f24" podNamespace="kube-system" podName="kube-proxy-s7v7z"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.254590    3087 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.319863    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bb6169f-8624-4bb9-9703-a3b4007b4f24-xtables-lock\") pod \"kube-proxy-s7v7z\" (UID: \"9bb6169f-8624-4bb9-9703-a3b4007b4f24\") " pod="kube-system/kube-proxy-s7v7z"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.320036    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bb6169f-8624-4bb9-9703-a3b4007b4f24-lib-modules\") pod \"kube-proxy-s7v7z\" (UID: \"9bb6169f-8624-4bb9-9703-a3b4007b4f24\") " pod="kube-system/kube-proxy-s7v7z"
	May 20 13:30:10 pause-587544 kubelet[3087]: E0520 13:30:10.415624    3087 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-587544\" already exists" pod="kube-system/kube-apiserver-pause-587544"
	May 20 13:30:10 pause-587544 kubelet[3087]: I0520 13:30:10.547315    3087 scope.go:117] "RemoveContainer" containerID="04cec37cef9fa3634492543beff5a3bf70b6d0bddcd927b936126d1c9220d6b7"
	May 20 13:30:12 pause-587544 kubelet[3087]: I0520 13:30:12.825556    3087 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-587544 -n pause-587544
helpers_test.go:261: (dbg) Run:  kubectl --context pause-587544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (40.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.057s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
E0520 13:51:43.350210  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/auto-301514/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.45:8443: connect: connection refused
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (26m11s)
	TestNetworkPlugins/group (17m50s)
	TestStartStop (23m42s)
	TestStartStop/group/default-k8s-diff-port (15m23s)
	TestStartStop/group/default-k8s-diff-port/serial (15m23s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (3m0s)
	TestStartStop/group/embed-certs (17m51s)
	TestStartStop/group/embed-certs/serial (17m51s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (3m40s)
	TestStartStop/group/no-preload (18m2s)
	TestStartStop/group/no-preload/serial (18m2s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (2m32s)
	TestStartStop/group/old-k8s-version (18m50s)
	TestStartStop/group/old-k8s-version/serial (18m50s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (34s)

                                                
                                                
goroutine 4142 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004ecd00, 0xc001117bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008822a0, {0x4941b20, 0x2b, 0x2b}, {0x26a2ff3?, 0xc000984900?, 0x49fe280?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00087ab40)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00087ab40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006daf00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 3183 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc001906f50, 0xc001110f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xee?, 0xc001906f50, 0xc001906f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc001700b60?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001906fd0?, 0x593064?, 0xc001906fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3216
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 21 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 20
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2527 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001122480, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2842 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00064b810, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ab08a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00064b840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012007b0, {0x3612ca0, 0xc001add9b0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012007b0, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3959 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001122a40, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3957
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2526 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ab0d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2758 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a67380, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2737
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1686 [chan receive, 26 minutes]:
testing.(*T).Run(0xc00158a000, {0x2648b06?, 0x55149c?}, 0xc000650588)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00158a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00158a000, 0x30b9808)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2285 [chan receive, 15 minutes]:
testing.(*T).Run(0xc001701380, {0x264a09d?, 0x0?}, 0xc00111c800)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001701380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001701380, 0xc0009743c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 601 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc0019ebe60)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 598
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 396 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008dc9c0, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2282 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001700d00, 0x30b9a28)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1752
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3957 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36369a0, 0xc00049e150}, {0x362a040, 0xc000a88060}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc00074a230?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc00074a230}, 0xc00198d380, {0xc001b8c618, 0x16}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36369a0, 0xc00074a230}, 0xc00198d380, {0xc001b8c618, 0x16}, {0x265fd50?, 0xc00157af60?}, {0x551353?, 0x4a16cf?}, {0xc001dce180, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00198d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00198d380, 0xc0012ec000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2814
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1811 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0008187d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0000f64e0, 0xc000650588)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1686
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2800 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ab09c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2799
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2844 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2843
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2315 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001537b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2326
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2286 [chan receive, 18 minutes]:
testing.(*T).Run(0xc001701520, {0x264a09d?, 0x0?}, 0xc000973380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001701520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001701520, 0xc000974480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 230 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f3ac0cb1730, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000888100)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000888100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000826800)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000826800)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00086a0f0, {0x3629980, 0xc000826800})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc00086a0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00158a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 227
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 432 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 431
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 502 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a594a0, 0xc001942a80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 534
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2355 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2354
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2867 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001121560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2860
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2873 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 395 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001537ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2611 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2610
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2316 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0017b4800, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2326
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 384 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001644c60, 0xc0016334a0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 343
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2872 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc000507750, 0xc00110df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xd3?, 0xc000507750, 0xc000507798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc000657590?, 0xc000657590?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001ed8c30?, 0x0?, 0xc00182dcb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2868
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 2499 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc001221f50, 0xc001172f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0x7?, 0xc001221f50, 0xc001221f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc00198c000?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001221fd0?, 0x593064?, 0xc001288180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2421
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 3342 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b229c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2868 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0017b45c0, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2860
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 404 [chan send, 77 minutes]:
os/exec.(*Cmd).watchCtx(0xc0011274a0, 0xc0002b16e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 403
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2721 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2720
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1752 [chan receive, 23 minutes]:
testing.(*T).Run(0xc00158b6c0, {0x2648b06?, 0x551353?}, 0x30b9a28)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00158b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00158b6c0, 0x30b9850)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2498 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0009749d0, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001605380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000974a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001974cb0, {0x3612ca0, 0xc0012882a0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001974cb0, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2421
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2283 [chan receive, 19 minutes]:
testing.(*T).Run(0xc001700ea0, {0x264a09d?, 0x0?}, 0xc001ab4580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001700ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001700ea0, 0xc000974340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 600 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc0019ebe60)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 598
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 430 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008dc990, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001537da0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008dc9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001201920, {0x3612ca0, 0xc0016231d0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001201920, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 396
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 431 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc001565f50, 0xc00152ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xc0?, 0xc001565f50, 0xc001565f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0x6c696e3c5b203a67?, 0x32353049090a5d3e?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc0015b14a0?, 0xc0008e1bc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 396
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 3265 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001a621a0, {0x2674603?, 0x60400000004?}, 0xc0012eca00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a621a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a621a0, 0xc00111c800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2285
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3308 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3307
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3950 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001122a10, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00188b260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001122a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aac380, {0x3612ca0, 0xc0016260f0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aac380, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3959
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 3186 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001a62340, {0x2674603?, 0x60400000004?}, 0xc00111c980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a62340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a62340, 0xc001ab4180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2288
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2871 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0017b4590, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001121440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0017b45c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001974b70, {0x3612ca0, 0xc001312f90}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001974b70, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2868
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2719 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc001a67350, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001233320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a67380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00085dca0, {0x3612ca0, 0xc001c02b70}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00085dca0, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2758
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2720 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc001564750, 0xc001564798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xc0?, 0xc001564750, 0xc001564798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc0015647b0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015647d0?, 0x593064?, 0xc0019423c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2758
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 2814 [chan receive]:
testing.(*T).Run(0xc0008a6ea0, {0x2674603?, 0x60400000004?}, 0xc0012ec000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0008a6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0008a6ea0, 0xc001ab4580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2283
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2273 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0017b47d0, 0x15)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001537a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0017b4800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00061c110, {0x3612ca0, 0xc0012882d0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00061c110, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2843 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc001562f50, 0xc001562f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0x40?, 0xc001562f50, 0xc001562f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc001562fb0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc001127080?, 0xc00149cb40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 2593 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc001122450, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ab0c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001122480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008b3880, {0x3612ca0, 0xc001118930}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008b3880, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2527
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 3477 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36369a0, 0xc0004a8310}, {0x362a040, 0xc001c06980}, 0x1, 0x0, 0xc0011ebc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc000652150?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc000652150}, 0xc0004ed520, {0xc001a3a7f8, 0x12}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36369a0, 0xc000652150}, 0xc0004ed520, {0xc001a3a7f8, 0x12}, {0x2655e04?, 0xc001221760?}, {0x551353?, 0x4a16cf?}, {0xc00087c500, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0004ed520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0004ed520, 0xc00111c980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3186
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3046 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001701040, {0x2674603?, 0x60400000004?}, 0xc000412700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001701040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001701040, 0xc000973380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2286
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2420 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016054a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2419
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2354 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0x0?, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc00158a000?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593005?, 0xc0014ba2c0?, 0xc0008e0f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 2288 [chan receive, 17 minutes]:
testing.(*T).Run(0xc001701860, {0x264a09d?, 0x0?}, 0xc001ab4180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001701860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001701860, 0xc000974580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2500 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2499
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2421 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000974a00, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2419
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2610 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc000095750, 0xc0000adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xa0?, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc001263e60?, 0xc001ab8280?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000957d0?, 0x593064?, 0xc0008e08a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2527
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 3591 [IO wait]:
internal/poll.runtime_pollWait(0x7f3ac0cb1638, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000413600?, 0xc0011af000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000413600, {0xc0011af000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000413600, {0xc0011af000?, 0xc001316f00?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0006a1060, {0xc0011af000?, 0xc0011af05f?, 0x6f?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00120b980, {0xc0011af000?, 0x0?, 0xc00120b980?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001a34d30, {0x3613440, 0xc00120b980})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001a34a88, {0x7f3ac01c6c58, 0xc000817548}, 0xc001535980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001a34a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001a34a88, {0xc00134f000, 0x1000, 0xc001961500?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00188aba0, {0xc0001159a0, 0x9, 0x48fdc00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3611920, 0xc00188aba0}, {0xc0001159a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0001159a0, 0x9, 0x1535dc0?}, {0x3611920?, 0xc00188aba0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000115960)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001535fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001309380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 3590
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d

goroutine 2801 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00064b840, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2799
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2757 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001233440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2737
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2923 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b23380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2914
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2864 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0011225d0, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b23260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001122600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001372920, {0x3612ca0, 0xc001c037a0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001372920, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2924
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 3182 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001122810, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001605080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001122840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008b2850, {0x3612ca0, 0xc001b4c4b0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008b2850, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3216
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2930 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2865
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2924 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001122600, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2914
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2865 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc000508f50, 0xc000508f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xd0?, 0xc000508f50, 0xc000508f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc000508fb0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000508fd0?, 0x593064?, 0xc0014f2210?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2924
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 3306 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc00064b690, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2139c20?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b227e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00064b6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00182c270, {0x3612ca0, 0xc00124ebd0}, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00182c270, 0x3b9aca00, 0x0, 0x1, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 3184 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3183
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3216 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001122840, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 3583 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36369a0, 0xc00041f110}, {0x362a040, 0xc001501360}, 0x1, 0x0, 0xc0011e7c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc0004a8230?}, 0x3b9aca00, 0xc0012abe10?, 0x1, 0xc0012abc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc0004a8230}, 0xc001701a00, {0xc001817770, 0x11}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36369a0, 0xc0004a8230}, 0xc001701a00, {0xc001817770, 0x11}, {0x2653c28?, 0xc00157af60?}, {0x551353?, 0x4a16cf?}, {0xc00125e500, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001701a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001701a00, 0xc000412700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3046
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3307 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc00121df50, 0xc00121df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xc0?, 0xc00121df50, 0xc00121df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc00121dfb0?, 0x99cff8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00121dfd0?, 0x593064?, 0xc0008e12c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 3215 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016051a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3538 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36369a0, 0xc000474850}, {0x362a040, 0xc001824d60}, 0x1, 0x0, 0xc0012abc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36369a0?, 0xc0004a0230?}, 0x3b9aca00, 0xc001191e10?, 0x1, 0xc001191c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36369a0, 0xc0004a0230}, 0xc00198c000, {0xc00187c020, 0x1c}, {0x266e8f0, 0x14}, {0x2686397, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36369a0, 0xc0004a0230}, 0xc00198c000, {0xc00187c020, 0x1c}, {0x26717ac?, 0xc00157c760?}, {0x551353?, 0x4a16cf?}, {0xc0001cbc00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00198c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00198c000, 0xc0012eca00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3265
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3343 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00064b6c0, 0xc000060de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 3484 [IO wait]:
internal/poll.runtime_pollWait(0x7f3ac0cb09a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00111d780?, 0xc0011ae800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00111d780, {0xc0011ae800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00111d780, {0xc0011ae800?, 0x7f3ac01de618?, 0xc001251d88?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0006a0338, {0xc0011ae800?, 0xc00118a938?, 0x41467b?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc001251d88, {0xc0011ae800?, 0x0?, 0xc001251d88?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00020deb0, {0x3613440, 0xc001251d88})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00020dc08, {0x3612820, 0xc0006a0338}, 0xc00118a980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00020dc08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00020dc08, {0xc001332000, 0x1000, 0xc0013e5340?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0012329c0, {0xc000115460, 0x9, 0x48fdc00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3611920, 0xc0012329c0}, {0xc000115460, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000115460, 0x9, 0x118adc0?}, {0x3611920?, 0xc0012329c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000115420)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00118afa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00182e180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 3483
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d

goroutine 3958 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00188b380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3957
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3951 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3636b60, 0xc000060de0}, 0xc000507750, 0xc000507798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3636b60, 0xc000060de0}, 0xd3?, 0xc000507750, 0xc000507798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3636b60?, 0xc000060de0?}, 0xc000657590?, 0xc000657590?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001ed8c30?, 0x0?, 0xc00182dcb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3959
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3464 [IO wait]:
internal/poll.runtime_pollWait(0x7f3ac0cb0e78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0012ed700?, 0xc0012a0000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0012ed700, {0xc0012a0000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0012ed700, {0xc0012a0000?, 0xc0006ea640?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001a36008, {0xc0012a0000?, 0xc0012a005f?, 0x6f?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00120b8a8, {0xc0012a0000?, 0x0?, 0xc00120b8a8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00012b0b0, {0x3613440, 0xc00120b8a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00012ae08, {0x7f3ac01c6c58, 0xc001284000}, 0xc00110e980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00012ae08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00012ae08, {0xc0012b1000, 0x1000, 0xc0013e5340?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00093d800, {0xc0016422e0, 0x9, 0x48fdc00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3611920, 0xc00093d800}, {0xc0016422e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0016422e0, 0x9, 0x110edc0?}, {0x3611920?, 0xc00093d800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0016422a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00110efa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000002780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 3463
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 3952 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3951
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb
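
The parked goroutines above are routine background workers of a Kubernetes client rather than the hang itself: goroutines 3484 and 3464 are idle HTTP/2 read loops on API-server connections, goroutine 3958 is a client-go workqueue delaying loop, and goroutines 3951/3952 belong to the client-go transport cert-rotation poller. As a minimal sketch (not part of the test output; the one-second interval and always-false condition are invented for illustration), the wait helper those last two goroutines are blocked in is used roughly like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Give the poller a deadline so this sketch terminates on its own.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()

		// Poll immediately, then once per second, until the condition reports
		// done or the context ends; this is the helper the blocked
		// cert_rotation.go goroutines above are waiting inside.
		err := wait.PollImmediateUntilWithContext(ctx, time.Second, func(ctx context.Context) (bool, error) {
			// Hypothetical condition; the real caller re-checks client certificates.
			return false, nil
		})
		fmt.Println("poller exited:", err) // err is non-nil once the deadline passes
	}
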

                                                
                                    

Test pass (162/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 3.6
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.05
18 TestDownloadOnly/v1.30.1/DeleteAll 0.13
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 98.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 48.78
29 TestCertExpiration 320.62
31 TestForceSystemdFlag 53.23
32 TestForceSystemdEnv 49.23
34 TestKVMDriverInstallOrUpdate 1.05
39 TestErrorSpam/start 0.33
40 TestErrorSpam/status 0.71
41 TestErrorSpam/pause 1.54
42 TestErrorSpam/unpause 1.51
43 TestErrorSpam/stop 4.77
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 95.64
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 34.35
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.07
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
55 TestFunctional/serial/CacheCmd/cache/add_local 1.2
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.11
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 35.6
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.38
66 TestFunctional/serial/LogsFileCmd 1.41
67 TestFunctional/serial/InvalidService 3.95
69 TestFunctional/parallel/ConfigCmd 0.35
70 TestFunctional/parallel/DashboardCmd 15.37
71 TestFunctional/parallel/DryRun 0.32
72 TestFunctional/parallel/InternationalLanguage 0.16
73 TestFunctional/parallel/StatusCmd 0.9
77 TestFunctional/parallel/ServiceCmdConnect 17.52
78 TestFunctional/parallel/AddonsCmd 0.13
79 TestFunctional/parallel/PersistentVolumeClaim 40.67
81 TestFunctional/parallel/SSHCmd 0.43
82 TestFunctional/parallel/CpCmd 1.26
83 TestFunctional/parallel/MySQL 26.09
84 TestFunctional/parallel/FileSync 0.24
85 TestFunctional/parallel/CertSync 1.45
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
93 TestFunctional/parallel/License 0.18
94 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
95 TestFunctional/parallel/ProfileCmd/profile_list 0.33
96 TestFunctional/parallel/MountCmd/any-port 6.84
97 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
107 TestFunctional/parallel/Version/short 0.05
108 TestFunctional/parallel/Version/components 0.49
109 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
110 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
111 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
112 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
113 TestFunctional/parallel/ImageCommands/ImageBuild 2.4
114 TestFunctional/parallel/ImageCommands/Setup 0.98
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.88
116 TestFunctional/parallel/MountCmd/specific-port 2.11
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.72
118 TestFunctional/parallel/MountCmd/VerifyCleanup 1.46
119 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.77
120 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.26
121 TestFunctional/parallel/ImageCommands/ImageRemove 1.44
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
126 TestFunctional/parallel/ServiceCmd/DeployApp 11.31
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.11
128 TestFunctional/parallel/ServiceCmd/List 1.22
129 TestFunctional/parallel/ServiceCmd/JSONOutput 1.22
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
131 TestFunctional/parallel/ServiceCmd/Format 0.28
132 TestFunctional/parallel/ServiceCmd/URL 0.27
133 TestFunctional/delete_addon-resizer_images 0.06
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 204.06
140 TestMultiControlPlane/serial/DeployApp 4.33
141 TestMultiControlPlane/serial/PingHostFromPods 1.21
142 TestMultiControlPlane/serial/AddWorkerNode 45.91
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
145 TestMultiControlPlane/serial/CopyFile 12.67
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.19
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
154 TestMultiControlPlane/serial/RestartCluster 345.03
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
156 TestMultiControlPlane/serial/AddSecondaryNode 70.12
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
161 TestJSONOutput/start/Command 56.94
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.73
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.63
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.36
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.19
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 87.25
193 TestMountStart/serial/StartWithMountFirst 23.86
194 TestMountStart/serial/VerifyMountFirst 0.36
195 TestMountStart/serial/StartWithMountSecond 23.69
196 TestMountStart/serial/VerifyMountSecond 0.35
197 TestMountStart/serial/DeleteFirst 0.89
198 TestMountStart/serial/VerifyMountPostDelete 0.36
199 TestMountStart/serial/Stop 1.27
200 TestMountStart/serial/RestartStopped 22.11
201 TestMountStart/serial/VerifyMountPostStop 0.37
204 TestMultiNode/serial/FreshStart2Nodes 100.89
205 TestMultiNode/serial/DeployApp2Nodes 4.01
206 TestMultiNode/serial/PingHostFrom2Pods 0.78
207 TestMultiNode/serial/AddNode 38.67
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.21
210 TestMultiNode/serial/CopyFile 6.98
211 TestMultiNode/serial/StopNode 2.43
212 TestMultiNode/serial/StartAfterStop 25.99
214 TestMultiNode/serial/DeleteNode 2.22
216 TestMultiNode/serial/RestartMultiNode 175
217 TestMultiNode/serial/ValidateNameConflict 43.46
224 TestScheduledStopUnix 109.55
228 TestRunningBinaryUpgrade 218.84
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
234 TestNoKubernetes/serial/StartWithK8s 90.05
235 TestNoKubernetes/serial/StartWithStopK8s 13.4
236 TestNoKubernetes/serial/Start 48.98
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
246 TestNoKubernetes/serial/ProfileList 1.39
250 TestNoKubernetes/serial/Stop 1.43
251 TestNoKubernetes/serial/StartNoArgs 21.61
252 TestStoppedBinaryUpgrade/Setup 0.44
253 TestStoppedBinaryUpgrade/Upgrade 136.38
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
264 TestPause/serial/Start 86.1
x
+
TestDownloadOnly/v1.20.0/json-events (7.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-615228 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-615228 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.102416887s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-615228
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-615228: exit status 85 (57.818173ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-615228 | jenkins | v1.33.1 | 20 May 24 11:52 UTC |          |
	|         | -p download-only-615228        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:52:07
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:52:07.916190  860346 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:52:07.916474  860346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:52:07.916485  860346 out.go:304] Setting ErrFile to fd 2...
	I0520 11:52:07.916492  860346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:52:07.916667  860346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	W0520 11:52:07.916801  860346 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18932-852915/.minikube/config/config.json: open /home/jenkins/minikube-integration/18932-852915/.minikube/config/config.json: no such file or directory
	I0520 11:52:07.917378  860346 out.go:298] Setting JSON to true
	I0520 11:52:07.918307  860346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5676,"bootTime":1716200252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:52:07.918366  860346 start.go:139] virtualization: kvm guest
	I0520 11:52:07.920660  860346 out.go:97] [download-only-615228] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:52:07.921940  860346 out.go:169] MINIKUBE_LOCATION=18932
	W0520 11:52:07.920768  860346 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 11:52:07.920796  860346 notify.go:220] Checking for updates...
	I0520 11:52:07.924307  860346 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:52:07.925574  860346 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 11:52:07.926699  860346 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 11:52:07.927951  860346 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0520 11:52:07.930028  860346 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 11:52:07.930259  860346 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:52:07.960857  860346 out.go:97] Using the kvm2 driver based on user configuration
	I0520 11:52:07.960879  860346 start.go:297] selected driver: kvm2
	I0520 11:52:07.960892  860346 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:52:07.961202  860346 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:52:07.961266  860346 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18932-852915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:52:07.976489  860346 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:52:07.976536  860346 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:52:07.977033  860346 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0520 11:52:07.977179  860346 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 11:52:07.977206  860346 cni.go:84] Creating CNI manager for ""
	I0520 11:52:07.977214  860346 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:07.977222  860346 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:52:07.977267  860346 start.go:340] cluster config:
	{Name:download-only-615228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-615228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:52:07.977429  860346 iso.go:125] acquiring lock: {Name:mk3157c164caa8ae686ff04303afbc15ebd2dfcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:52:07.979310  860346 out.go:97] Downloading VM boot image ...
	I0520 11:52:07.979342  860346 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:52:10.680855  860346 out.go:97] Starting "download-only-615228" primary control-plane node in "download-only-615228" cluster
	I0520 11:52:10.680888  860346 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:52:10.704347  860346 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 11:52:10.704376  860346 cache.go:56] Caching tarball of preloaded images
	I0520 11:52:10.704531  860346 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:52:10.706110  860346 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 11:52:10.706121  860346 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0520 11:52:10.730510  860346 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-615228 host does not exist
	  To start a cluster, run: "minikube start -p download-only-615228"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
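
The Last Start log above fetches the boot ISO and the v1.20.0 preload tarball through URLs that carry a ?checksum= query (a remote .sha256 file for the ISO, an md5 digest for the preload), so each download can be verified before it lands in the cache. A minimal stand-alone sketch of that verification, reusing the cache path and md5 shown in the log (assumed values, not re-run here):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Expected digest and cache path taken from the Last Start log above.
		const expected = "f93b07cde9c3289306cbaeb7a1803c19"
		path := "/home/jenkins/minikube-integration/18932-852915/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Hash the cached tarball and compare it against the expected md5.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		fmt.Printf("md5 %s (match: %v)\n", got, got == expected)
	}
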

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-615228
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (3.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-727530 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-727530 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.595411025s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (3.60s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-727530
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-727530: exit status 85 (54.570636ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-615228 | jenkins | v1.33.1 | 20 May 24 11:52 UTC |                     |
	|         | -p download-only-615228        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 11:52 UTC | 20 May 24 11:52 UTC |
	| delete  | -p download-only-615228        | download-only-615228 | jenkins | v1.33.1 | 20 May 24 11:52 UTC | 20 May 24 11:52 UTC |
	| start   | -o=json --download-only        | download-only-727530 | jenkins | v1.33.1 | 20 May 24 11:52 UTC |                     |
	|         | -p download-only-727530        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:52:15
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:52:15.326506  860517 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:52:15.326774  860517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:52:15.326785  860517 out.go:304] Setting ErrFile to fd 2...
	I0520 11:52:15.326789  860517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:52:15.327064  860517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 11:52:15.327608  860517 out.go:298] Setting JSON to true
	I0520 11:52:15.328552  860517 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5683,"bootTime":1716200252,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:52:15.328606  860517 start.go:139] virtualization: kvm guest
	I0520 11:52:15.330890  860517 out.go:97] [download-only-727530] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:52:15.332167  860517 out.go:169] MINIKUBE_LOCATION=18932
	I0520 11:52:15.331027  860517 notify.go:220] Checking for updates...
	I0520 11:52:15.334402  860517 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:52:15.335712  860517 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 11:52:15.337021  860517 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 11:52:15.338415  860517 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-727530 host does not exist
	  To start a cluster, run: "minikube start -p download-only-727530"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-727530
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-722311 --alsologtostderr --binary-mirror http://127.0.0.1:33213 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-722311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-722311
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
x
+
TestOffline (98.11s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-739078 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-739078 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.094257602s)
helpers_test.go:175: Cleaning up "offline-crio-739078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-739078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-739078: (1.018884498s)
--- PASS: TestOffline (98.11s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-972916
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-972916: exit status 85 (46.177946ms)

                                                
                                                
-- stdout --
	* Profile "addons-972916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-972916"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-972916
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-972916: exit status 85 (48.035388ms)

                                                
                                                
-- stdout --
	* Profile "addons-972916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-972916"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestCertOptions (48.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-043975 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-043975 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.228893078s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-043975 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-043975 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-043975 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-043975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-043975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-043975: (1.06146785s)
--- PASS: TestCertOptions (48.78s)

                                                
                                    
x
+
TestCertExpiration (320.62s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-866786 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-866786 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.130261884s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-866786 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-866786 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m13.390423797s)
helpers_test.go:175: Cleaning up "cert-expiration-866786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-866786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-866786: (1.093833308s)
--- PASS: TestCertExpiration (320.62s)

                                                
                                    
x
+
TestForceSystemdFlag (53.23s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-783351 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-783351 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.885473972s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-783351 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-783351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-783351
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-783351: (1.132008665s)
--- PASS: TestForceSystemdFlag (53.23s)

                                                
                                    
x
+
TestForceSystemdEnv (49.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-214339 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-214339 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.26232224s)
helpers_test.go:175: Cleaning up "force-systemd-env-214339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-214339
--- PASS: TestForceSystemdEnv (49.23s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.05s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.05s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
x
+
TestErrorSpam/stop (4.77s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 stop: (2.293827161s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 stop: (1.283524117s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-836913 --log_dir /tmp/nospam-836913 stop: (1.196008246s)
--- PASS: TestErrorSpam/stop (4.77s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18932-852915/.minikube/files/etc/test/nested/copy/860334/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (95.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195764 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-195764 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.641497218s)
--- PASS: TestFunctional/serial/StartWithProxy (95.64s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (34.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195764 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-195764 --alsologtostderr -v=8: (34.344746352s)
functional_test.go:659: soft start took 34.345585574s for "functional-195764" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.35s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-195764 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 cache add registry.k8s.io/pause:3.1: (1.008730279s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 cache add registry.k8s.io/pause:3.3: (1.113577133s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 cache add registry.k8s.io/pause:latest: (1.128171043s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-195764 /tmp/TestFunctionalserialCacheCmdcacheadd_local3438789841/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cache add minikube-local-cache-test:functional-195764
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cache delete minikube-local-cache-test:functional-195764
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-195764
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.043722ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 kubectl -- --context functional-195764 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-195764 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.6s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195764 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-195764 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.597787009s)
functional_test.go:757: restart took 35.597898189s for "functional-195764" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.60s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-195764 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 logs: (1.382153672s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 logs --file /tmp/TestFunctionalserialLogsFileCmd3829242828/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 logs --file /tmp/TestFunctionalserialLogsFileCmd3829242828/001/logs.txt: (1.407345139s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
TestFunctional/serial/InvalidService (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-195764 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-195764
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-195764: exit status 115 (268.13327ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.132:31133 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-195764 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)
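
Exit status 115 (SVC_UNREACHABLE) is the expected result here: the Service from testdata/invalidsvc.yaml exists, but no running pod backs it, so minikube service refuses to print a usable URL. The manifest itself is not reproduced in this report; a hypothetical stand-in that triggers the same condition is any Service whose selector matches no pods:

    # hypothetical stand-in for testdata/invalidsvc.yaml: a NodePort Service with no matching pods
    kubectl --context functional-195764 create service nodeport invalid-svc --tcp=80:80
    out/minikube-linux-amd64 service invalid-svc -p functional-195764   # exits 115: no running pod for service invalid-svc
    kubectl --context functional-195764 delete service invalid-svc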

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 config get cpus: exit status 14 (53.924782ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 config get cpus: exit status 14 (53.947888ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
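
Both "Non-zero exit ... exit status 14" lines are part of a passing run: minikube config get exits 14 when the requested key is not present in the config. The full cycle the test drives, as a standalone sketch:

    out/minikube-linux-amd64 -p functional-195764 config unset cpus
    out/minikube-linux-amd64 -p functional-195764 config get cpus     # exit 14: key not found in config
    out/minikube-linux-amd64 -p functional-195764 config set cpus 2
    out/minikube-linux-amd64 -p functional-195764 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-195764 config unset cpus
    out/minikube-linux-amd64 -p functional-195764 config get cpus     # exit 14 again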

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-195764 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-195764 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 872865: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.37s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195764 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-195764 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (168.971826ms)

                                                
                                                
-- stdout --
	* [functional-195764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:36:10.070594  872376 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:36:10.070690  872376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:10.070697  872376 out.go:304] Setting ErrFile to fd 2...
	I0520 12:36:10.070701  872376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:10.070951  872376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:36:10.071488  872376 out.go:298] Setting JSON to false
	I0520 12:36:10.072743  872376 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8318,"bootTime":1716200252,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:36:10.072824  872376 start.go:139] virtualization: kvm guest
	I0520 12:36:10.074964  872376 out.go:177] * [functional-195764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:36:10.076469  872376 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 12:36:10.076441  872376 notify.go:220] Checking for updates...
	I0520 12:36:10.077552  872376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:36:10.078765  872376 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:36:10.080047  872376 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:10.081244  872376 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:36:10.082391  872376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:36:10.084134  872376 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:36:10.084940  872376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:36:10.085029  872376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:36:10.100248  872376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0520 12:36:10.100742  872376 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:36:10.101326  872376 main.go:141] libmachine: Using API Version  1
	I0520 12:36:10.101348  872376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:36:10.101707  872376 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:36:10.101899  872376 main.go:141] libmachine: (functional-195764) Calling .DriverName
	I0520 12:36:10.102182  872376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:36:10.102530  872376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:36:10.102582  872376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:36:10.118010  872376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0520 12:36:10.118404  872376 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:36:10.118918  872376 main.go:141] libmachine: Using API Version  1
	I0520 12:36:10.118943  872376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:36:10.119369  872376 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:36:10.119553  872376 main.go:141] libmachine: (functional-195764) Calling .DriverName
	I0520 12:36:10.159049  872376 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 12:36:10.160310  872376 start.go:297] selected driver: kvm2
	I0520 12:36:10.160332  872376 start.go:901] validating driver "kvm2" against &{Name:functional-195764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-195764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:36:10.160469  872376 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:36:10.162905  872376 out.go:177] 
	W0520 12:36:10.164127  872376 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 12:36:10.165362  872376 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195764 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
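
The exit status 23 is the pass condition: 250MB is below minikube's 1800MB usable minimum, so the dry run fails validation without touching the existing VM, while the second invocation (no --memory override) validates cleanly. As two standalone commands:

    out/minikube-linux-amd64 start -p functional-195764 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY (requested 250MiB < 1800MB minimum)
    out/minikube-linux-amd64 start -p functional-195764 --dry-run --driver=kvm2 --container-runtime=crio
    # exit 0: validation only; --dry-run does not create or modify the cluster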

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-195764 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-195764 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.314127ms)

                                                
                                                
-- stdout --
	* [functional-195764] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 12:36:09.806663  872226 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:36:09.806835  872226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:09.806862  872226 out.go:304] Setting ErrFile to fd 2...
	I0520 12:36:09.806869  872226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:36:09.807139  872226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 12:36:09.807627  872226 out.go:298] Setting JSON to false
	I0520 12:36:09.808750  872226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8318,"bootTime":1716200252,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:36:09.808811  872226 start.go:139] virtualization: kvm guest
	I0520 12:36:09.810615  872226 out.go:177] * [functional-195764] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0520 12:36:09.812241  872226 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 12:36:09.813622  872226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:36:09.812314  872226 notify.go:220] Checking for updates...
	I0520 12:36:09.816191  872226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	I0520 12:36:09.817766  872226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	I0520 12:36:09.819149  872226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:36:09.820415  872226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:36:09.821913  872226 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:36:09.822328  872226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:36:09.822420  872226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:36:09.839267  872226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37523
	I0520 12:36:09.839863  872226 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:36:09.840578  872226 main.go:141] libmachine: Using API Version  1
	I0520 12:36:09.840598  872226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:36:09.840987  872226 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:36:09.841216  872226 main.go:141] libmachine: (functional-195764) Calling .DriverName
	I0520 12:36:09.841546  872226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:36:09.841828  872226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:36:09.841897  872226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:36:09.859213  872226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0520 12:36:09.859590  872226 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:36:09.860083  872226 main.go:141] libmachine: Using API Version  1
	I0520 12:36:09.860106  872226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:36:09.860402  872226 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:36:09.860570  872226 main.go:141] libmachine: (functional-195764) Calling .DriverName
	I0520 12:36:09.899388  872226 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0520 12:36:09.900663  872226 start.go:297] selected driver: kvm2
	I0520 12:36:09.900697  872226 start.go:901] validating driver "kvm2" against &{Name:functional-195764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-195764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:36:09.900851  872226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:36:09.903256  872226 out.go:177] 
	W0520 12:36:09.904358  872226 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 12:36:09.905561  872226 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (17.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-195764 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-195764 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2bwm5" [70efc6da-a116-4365-9738-f1bf2722f5ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2bwm5" [70efc6da-a116-4365-9738-f1bf2722f5ae] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.004114487s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.132:31946
functional_test.go:1671: http://192.168.39.132:31946: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-2bwm5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.132:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.132:31946
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.52s)
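
The echoserver round trip above can be reproduced with the same commands plus an HTTP fetch; curl here stands in for the test's own HTTP client, and the NodePort will differ from the 31946 seen in this run:

    kubectl --context functional-195764 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-195764 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-195764 service hello-node-connect --url   # prints http://<node-ip>:<nodeport>
    curl "$(out/minikube-linux-amd64 -p functional-195764 service hello-node-connect --url)"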

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3eee1d88-b5dc-4fb7-92e5-86a360d36671] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004005799s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-195764 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-195764 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-195764 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-195764 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-195764 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [65e9e38e-ad41-4f01-bbe6-530d9917c259] Pending
helpers_test.go:344: "sp-pod" [65e9e38e-ad41-4f01-bbe6-530d9917c259] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [65e9e38e-ad41-4f01-bbe6-530d9917c259] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004950255s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-195764 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-195764 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-195764 delete -f testdata/storage-provisioner/pod.yaml: (2.337994106s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-195764 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [207095bc-2e01-4b6c-be44-2269f4b514b6] Pending
helpers_test.go:344: "sp-pod" [207095bc-2e01-4b6c-be44-2269f4b514b6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [207095bc-2e01-4b6c-be44-2269f4b514b6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003336087s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-195764 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.67s)
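
The point of the second sp-pod is persistence: a file written through the PVC survives deleting and recreating the pod. The testdata manifests are not reproduced in this report; the persistence check itself is only these commands:

    kubectl --context functional-195764 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-195764 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-195764 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
    kubectl --context functional-195764 exec sp-pod -- ls /tmp/mount                      # foo is still there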

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh -n functional-195764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cp functional-195764:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2818868571/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh -n functional-195764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh -n functional-195764 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.26s)

                                                
                                    
TestFunctional/parallel/MySQL (26.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-195764 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-9c7ph" [93dd5044-7ed6-463a-9b84-3c1794ab85a7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-9c7ph" [93dd5044-7ed6-463a-9b84-3c1794ab85a7] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.209776825s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-195764 exec mysql-64454c8b5c-9c7ph -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-195764 exec mysql-64454c8b5c-9c7ph -- mysql -ppassword -e "show databases;": exit status 1 (144.291242ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-195764 exec mysql-64454c8b5c-9c7ph -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-195764 exec mysql-64454c8b5c-9c7ph -- mysql -ppassword -e "show databases;": exit status 1 (140.646324ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-195764 exec mysql-64454c8b5c-9c7ph -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.09s)
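
The two ERROR 2002 failures are the usual mysqld warm-up window: the pod is Running but the server is not yet accepting connections on its socket, so the test retries the query until it succeeds. By hand the retry looks roughly like this (pod name taken from the run above; a fresh run will have a different suffix):

    until kubectl --context functional-195764 exec mysql-64454c8b5c-9c7ph -- \
          mysql -ppassword -e "show databases;"; do
      sleep 5   # wait for mysqld to start accepting socket connections
    done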

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/860334/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /etc/test/nested/copy/860334/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/860334.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /etc/ssl/certs/860334.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/860334.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /usr/share/ca-certificates/860334.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8603342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /etc/ssl/certs/8603342.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8603342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /usr/share/ca-certificates/8603342.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
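
The hash-named files (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases for the two synced certificates, which is why each certificate is checked under three paths. Assuming a local copy of the synced certificate (the 860334.pem name is taken from this run), the hash can be recomputed and the synced copy read back:

    openssl x509 -noout -subject_hash -in 860334.pem    # prints the hash used as the .0 filename
    out/minikube-linux-amd64 -p functional-195764 ssh "sudo cat /etc/ssl/certs/51391683.0"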

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-195764 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active docker": exit status 1 (202.662028ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active containerd": exit status 1 (234.444007ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
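
With crio as the selected runtime, docker and containerd are expected to be inactive. systemctl is-active prints "inactive" and exits 3 for a stopped unit, and minikube ssh surfaces that as the non-zero exits above, so both failures are the passing outcome. Checked by hand (the crio line is an extra sanity check, not part of this test):

    out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active docker"       # inactive, remote exit 3
    out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active containerd"   # inactive, remote exit 3
    out/minikube-linux-amd64 -p functional-195764 ssh "sudo systemctl is-active crio"         # active, exit 0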

                                                
                                    
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "275.709669ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "56.06651ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdany-port1906843624/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716208569912697567" to /tmp/TestFunctionalparallelMountCmdany-port1906843624/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716208569912697567" to /tmp/TestFunctionalparallelMountCmdany-port1906843624/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716208569912697567" to /tmp/TestFunctionalparallelMountCmdany-port1906843624/001/test-1716208569912697567
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.127598ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 20 12:36 created-by-test
-rw-r--r-- 1 docker docker 24 May 20 12:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 20 12:36 test-1716208569912697567
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh cat /mount-9p/test-1716208569912697567
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-195764 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2c377f4a-5c35-4546-b557-446b2317ea0c] Pending
helpers_test.go:344: "busybox-mount" [2c377f4a-5c35-4546-b557-446b2317ea0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2c377f4a-5c35-4546-b557-446b2317ea0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2c377f4a-5c35-4546-b557-446b2317ea0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004803664s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-195764 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdany-port1906843624/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.84s)
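
The any-port variant drives a 9p mount from a host temp directory into the VM at /mount-9p and then into a pod via testdata/busybox-mount-test.yaml (not reproduced here). The by-hand equivalent, with a hypothetical host directory in place of the generated temp dir:

    out/minikube-linux-amd64 mount -p functional-195764 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1 &   # /tmp/some-host-dir is hypothetical
    out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p"   # confirms the 9p mount is up
    out/minikube-linux-amd64 -p functional-195764 ssh -- ls -la /mount-9p
    kubectl --context functional-195764 replace --force -f testdata/busybox-mount-test.yaml
    out/minikube-linux-amd64 -p functional-195764 ssh "sudo umount -f /mount-9p"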

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "242.551521ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "48.778771ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195764 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-195764
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-195764
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195764 image ls --format short --alsologtostderr:
I0520 12:36:46.064148  874418 out.go:291] Setting OutFile to fd 1 ...
I0520 12:36:46.064369  874418 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:46.064378  874418 out.go:304] Setting ErrFile to fd 2...
I0520 12:36:46.064381  874418 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:46.064547  874418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
I0520 12:36:46.065110  874418 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:46.065200  874418 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:46.065501  874418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:46.065555  874418 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:46.080625  874418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39171
I0520 12:36:46.081059  874418 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:46.081581  874418 main.go:141] libmachine: Using API Version  1
I0520 12:36:46.081605  874418 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:46.082020  874418 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:46.082252  874418 main.go:141] libmachine: (functional-195764) Calling .GetState
I0520 12:36:46.084185  874418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:46.084227  874418 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:46.098820  874418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
I0520 12:36:46.099191  874418 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:46.099588  874418 main.go:141] libmachine: Using API Version  1
I0520 12:36:46.099612  874418 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:46.099885  874418 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:46.100086  874418 main.go:141] libmachine: (functional-195764) Calling .DriverName
I0520 12:36:46.100280  874418 ssh_runner.go:195] Run: systemctl --version
I0520 12:36:46.100304  874418 main.go:141] libmachine: (functional-195764) Calling .GetSSHHostname
I0520 12:36:46.102708  874418 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:46.103089  874418 main.go:141] libmachine: (functional-195764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:1a:15", ip: ""} in network mk-functional-195764: {Iface:virbr1 ExpiryTime:2024-05-20 13:33:24 +0000 UTC Type:0 Mac:52:54:00:b9:1a:15 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-195764 Clientid:01:52:54:00:b9:1a:15}
I0520 12:36:46.103126  874418 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined IP address 192.168.39.132 and MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:46.103294  874418 main.go:141] libmachine: (functional-195764) Calling .GetSSHPort
I0520 12:36:46.103459  874418 main.go:141] libmachine: (functional-195764) Calling .GetSSHKeyPath
I0520 12:36:46.103621  874418 main.go:141] libmachine: (functional-195764) Calling .GetSSHUsername
I0520 12:36:46.103747  874418 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/functional-195764/id_rsa Username:docker}
I0520 12:36:46.190667  874418 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 12:36:46.286543  874418 main.go:141] libmachine: Making call to close driver server
I0520 12:36:46.286561  874418 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:46.286917  874418 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:46.286937  874418 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 12:36:46.286945  874418 main.go:141] libmachine: Making call to close driver server
I0520 12:36:46.286954  874418 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:46.287210  874418 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:46.287261  874418 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195764 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-195764  | 3a2c44c4a4f28 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | e784f4560448b | 192MB  |
| localhost/my-image                      | functional-195764  | f4a81bc560f7b | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| gcr.io/google-containers/addon-resizer  | functional-195764  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195764 image ls --format table --alsologtostderr:
I0520 12:36:49.328605  874572 out.go:291] Setting OutFile to fd 1 ...
I0520 12:36:49.328882  874572 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:49.328893  874572 out.go:304] Setting ErrFile to fd 2...
I0520 12:36:49.328897  874572 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:49.329127  874572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
I0520 12:36:49.329775  874572 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:49.329916  874572 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:49.330467  874572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:49.330603  874572 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:49.346406  874572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
I0520 12:36:49.347017  874572 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:49.347626  874572 main.go:141] libmachine: Using API Version  1
I0520 12:36:49.347653  874572 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:49.347958  874572 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:49.348139  874572 main.go:141] libmachine: (functional-195764) Calling .GetState
I0520 12:36:49.349928  874572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:49.349990  874572 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:49.364725  874572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
I0520 12:36:49.365239  874572 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:49.365781  874572 main.go:141] libmachine: Using API Version  1
I0520 12:36:49.365805  874572 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:49.366120  874572 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:49.366287  874572 main.go:141] libmachine: (functional-195764) Calling .DriverName
I0520 12:36:49.366521  874572 ssh_runner.go:195] Run: systemctl --version
I0520 12:36:49.366548  874572 main.go:141] libmachine: (functional-195764) Calling .GetSSHHostname
I0520 12:36:49.369468  874572 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:49.369924  874572 main.go:141] libmachine: (functional-195764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:1a:15", ip: ""} in network mk-functional-195764: {Iface:virbr1 ExpiryTime:2024-05-20 13:33:24 +0000 UTC Type:0 Mac:52:54:00:b9:1a:15 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-195764 Clientid:01:52:54:00:b9:1a:15}
I0520 12:36:49.369954  874572 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined IP address 192.168.39.132 and MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:49.370096  874572 main.go:141] libmachine: (functional-195764) Calling .GetSSHPort
I0520 12:36:49.370260  874572 main.go:141] libmachine: (functional-195764) Calling .GetSSHKeyPath
I0520 12:36:49.370406  874572 main.go:141] libmachine: (functional-195764) Calling .GetSSHUsername
I0520 12:36:49.370559  874572 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/functional-195764/id_rsa Username:docker}
I0520 12:36:49.462049  874572 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 12:36:49.500978  874572 main.go:141] libmachine: Making call to close driver server
I0520 12:36:49.500997  874572 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:49.501275  874572 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:49.501291  874572 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 12:36:49.501300  874572 main.go:141] libmachine: Making call to close driver server
I0520 12:36:49.501308  874572 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:49.501642  874572 main.go:141] libmachine: (functional-195764) DBG | Closing plugin on server side
I0520 12:36:49.501641  874572 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:49.501673  874572 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195764 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a59698bd17762166e767fd6812da4fc1d680168d3441ad301551c823411d9071","repoDigests":["docker.io/library/12ccca71931cfa1a24a3126f399792d648ab2b3a1fec751fb88f3fc95b89517b-tmp@sha256:492a689a1d5251df155419a187ea3d2cc0b66b7ff0c93da5265234a2
110196d9"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"f4a
81bc560f7b39b94596d992b23bdab18f227c26f5903371e25f672a3c59b48","repoDigests":["localhost/my-image@sha256:3ce276c181030980b47611dd28d358fdeb593b77a64c7c79c8b87defd207743d"],"repoTags":["localhost/my-image:functional-195764"],"size":"1468600"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","repoDigests":["docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c","docker.io/
library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"191805953"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-195764"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3a2c44c4a4f28f3d6932bce786dd3c669036d440c06d58e4a4e945d50f4ef24c","repoDigests":["localhost/minikube-local-cache-test@sha256:259762ee4dfe654841e22c64162eb6af1e85a1d167c46c78545
d84eab9ebd87d"],"repoTags":["localhost/minikube-local-cache-test:functional-195764"],"size":"3330"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":
["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"25a1387cdab8216
6df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195764 image ls --format json --alsologtostderr:
I0520 12:36:49.108193  874543 out.go:291] Setting OutFile to fd 1 ...
I0520 12:36:49.108332  874543 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:49.108343  874543 out.go:304] Setting ErrFile to fd 2...
I0520 12:36:49.108349  874543 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:49.108554  874543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
I0520 12:36:49.109105  874543 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:49.109223  874543 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:49.109639  874543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:49.109712  874543 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:49.124767  874543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
I0520 12:36:49.125254  874543 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:49.125798  874543 main.go:141] libmachine: Using API Version  1
I0520 12:36:49.125823  874543 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:49.126153  874543 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:49.126363  874543 main.go:141] libmachine: (functional-195764) Calling .GetState
I0520 12:36:49.128153  874543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:49.128191  874543 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:49.142698  874543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
I0520 12:36:49.143150  874543 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:49.143629  874543 main.go:141] libmachine: Using API Version  1
I0520 12:36:49.143650  874543 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:49.144009  874543 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:49.144185  874543 main.go:141] libmachine: (functional-195764) Calling .DriverName
I0520 12:36:49.144380  874543 ssh_runner.go:195] Run: systemctl --version
I0520 12:36:49.144403  874543 main.go:141] libmachine: (functional-195764) Calling .GetSSHHostname
I0520 12:36:49.146607  874543 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:49.147001  874543 main.go:141] libmachine: (functional-195764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:1a:15", ip: ""} in network mk-functional-195764: {Iface:virbr1 ExpiryTime:2024-05-20 13:33:24 +0000 UTC Type:0 Mac:52:54:00:b9:1a:15 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-195764 Clientid:01:52:54:00:b9:1a:15}
I0520 12:36:49.147035  874543 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined IP address 192.168.39.132 and MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:49.147149  874543 main.go:141] libmachine: (functional-195764) Calling .GetSSHPort
I0520 12:36:49.147317  874543 main.go:141] libmachine: (functional-195764) Calling .GetSSHKeyPath
I0520 12:36:49.147461  874543 main.go:141] libmachine: (functional-195764) Calling .GetSSHUsername
I0520 12:36:49.147566  874543 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/functional-195764/id_rsa Username:docker}
I0520 12:36:49.229797  874543 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 12:36:49.273533  874543 main.go:141] libmachine: Making call to close driver server
I0520 12:36:49.273547  874543 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:49.273831  874543 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:49.273858  874543 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 12:36:49.273861  874543 main.go:141] libmachine: (functional-195764) DBG | Closing plugin on server side
I0520 12:36:49.273876  874543 main.go:141] libmachine: Making call to close driver server
I0520 12:36:49.273886  874543 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:49.274118  874543 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:49.274132  874543 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 12:36:49.274151  874543 main.go:141] libmachine: (functional-195764) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
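
The JSON document above is a single flat array of image records with the keys id, repoDigests, repoTags and size (a byte count encoded as a string). As a minimal, hypothetical sketch outside the test suite, the Go program below decodes that shape from stdin and prints each tagged image with its size; the struct fields simply mirror the keys visible in the output above.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// listedImage mirrors the records shown above; Size is a byte count kept as a string.
	type listedImage struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		var images []listedImage
		if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%-55s %s bytes\n", img.RepoTags[0], img.Size)
			}
		}
	}

Piping the image ls --format json output above into such a program would, for example, report registry.k8s.io/etcd:3.5.12-0 at 150779692 bytes, matching the etcd entry listed above.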

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195764 image ls --format yaml --alsologtostderr:
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-195764
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3a2c44c4a4f28f3d6932bce786dd3c669036d440c06d58e4a4e945d50f4ef24c
repoDigests:
- localhost/minikube-local-cache-test@sha256:259762ee4dfe654841e22c64162eb6af1e85a1d167c46c78545d84eab9ebd87d
repoTags:
- localhost/minikube-local-cache-test:functional-195764
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests:
- docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c
- docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b
repoTags:
- docker.io/library/nginx:latest
size: "191805953"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195764 image ls --format yaml --alsologtostderr:
I0520 12:36:46.439613  874441 out.go:291] Setting OutFile to fd 1 ...
I0520 12:36:46.439745  874441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:46.439755  874441 out.go:304] Setting ErrFile to fd 2...
I0520 12:36:46.439759  874441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:46.439963  874441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
I0520 12:36:46.440502  874441 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:46.440593  874441 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:46.440977  874441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:46.441049  874441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:46.456339  874441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
I0520 12:36:46.456814  874441 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:46.457413  874441 main.go:141] libmachine: Using API Version  1
I0520 12:36:46.457443  874441 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:46.457754  874441 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:46.457951  874441 main.go:141] libmachine: (functional-195764) Calling .GetState
I0520 12:36:46.459955  874441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:46.460030  874441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:46.474940  874441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
I0520 12:36:46.475380  874441 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:46.475982  874441 main.go:141] libmachine: Using API Version  1
I0520 12:36:46.476025  874441 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:46.476353  874441 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:46.476565  874441 main.go:141] libmachine: (functional-195764) Calling .DriverName
I0520 12:36:46.477045  874441 ssh_runner.go:195] Run: systemctl --version
I0520 12:36:46.477088  874441 main.go:141] libmachine: (functional-195764) Calling .GetSSHHostname
I0520 12:36:46.480020  874441 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:46.480414  874441 main.go:141] libmachine: (functional-195764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:1a:15", ip: ""} in network mk-functional-195764: {Iface:virbr1 ExpiryTime:2024-05-20 13:33:24 +0000 UTC Type:0 Mac:52:54:00:b9:1a:15 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-195764 Clientid:01:52:54:00:b9:1a:15}
I0520 12:36:46.480439  874441 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined IP address 192.168.39.132 and MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:46.480579  874441 main.go:141] libmachine: (functional-195764) Calling .GetSSHPort
I0520 12:36:46.480750  874441 main.go:141] libmachine: (functional-195764) Calling .GetSSHKeyPath
I0520 12:36:46.480927  874441 main.go:141] libmachine: (functional-195764) Calling .GetSSHUsername
I0520 12:36:46.481089  874441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/functional-195764/id_rsa Username:docker}
I0520 12:36:46.593761  874441 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 12:36:46.659839  874441 main.go:141] libmachine: Making call to close driver server
I0520 12:36:46.659857  874441 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:46.660152  874441 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:46.660171  874441 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 12:36:46.660185  874441 main.go:141] libmachine: Making call to close driver server
I0520 12:36:46.660192  874441 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:46.660617  874441 main.go:141] libmachine: (functional-195764) DBG | Closing plugin on server side
I0520 12:36:46.660610  874441 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:46.660653  874441 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh pgrep buildkitd: exit status 1 (182.828196ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image build -t localhost/my-image:functional-195764 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image build -t localhost/my-image:functional-195764 testdata/build --alsologtostderr: (1.81903228s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-195764 image build -t localhost/my-image:functional-195764 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a59698bd177
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-195764
--> f4a81bc560f
Successfully tagged localhost/my-image:functional-195764
f4a81bc560f7b39b94596d992b23bdab18f227c26f5903371e25f672a3c59b48
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-195764 image build -t localhost/my-image:functional-195764 testdata/build --alsologtostderr:
I0520 12:36:46.890272  874496 out.go:291] Setting OutFile to fd 1 ...
I0520 12:36:46.890364  874496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:46.890371  874496 out.go:304] Setting ErrFile to fd 2...
I0520 12:36:46.890375  874496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 12:36:46.890535  874496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
I0520 12:36:46.891090  874496 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:46.891591  874496 config.go:182] Loaded profile config "functional-195764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 12:36:46.891920  874496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:46.891965  874496 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:46.907266  874496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
I0520 12:36:46.907703  874496 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:46.908220  874496 main.go:141] libmachine: Using API Version  1
I0520 12:36:46.908238  874496 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:46.908552  874496 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:46.908738  874496 main.go:141] libmachine: (functional-195764) Calling .GetState
I0520 12:36:46.910382  874496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 12:36:46.910415  874496 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 12:36:46.925541  874496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
I0520 12:36:46.925952  874496 main.go:141] libmachine: () Calling .GetVersion
I0520 12:36:46.926427  874496 main.go:141] libmachine: Using API Version  1
I0520 12:36:46.926449  874496 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 12:36:46.926718  874496 main.go:141] libmachine: () Calling .GetMachineName
I0520 12:36:46.926890  874496 main.go:141] libmachine: (functional-195764) Calling .DriverName
I0520 12:36:46.927098  874496 ssh_runner.go:195] Run: systemctl --version
I0520 12:36:46.927120  874496 main.go:141] libmachine: (functional-195764) Calling .GetSSHHostname
I0520 12:36:46.929781  874496 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:46.930165  874496 main.go:141] libmachine: (functional-195764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:1a:15", ip: ""} in network mk-functional-195764: {Iface:virbr1 ExpiryTime:2024-05-20 13:33:24 +0000 UTC Type:0 Mac:52:54:00:b9:1a:15 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-195764 Clientid:01:52:54:00:b9:1a:15}
I0520 12:36:46.930201  874496 main.go:141] libmachine: (functional-195764) DBG | domain functional-195764 has defined IP address 192.168.39.132 and MAC address 52:54:00:b9:1a:15 in network mk-functional-195764
I0520 12:36:46.930340  874496 main.go:141] libmachine: (functional-195764) Calling .GetSSHPort
I0520 12:36:46.930510  874496 main.go:141] libmachine: (functional-195764) Calling .GetSSHKeyPath
I0520 12:36:46.930635  874496 main.go:141] libmachine: (functional-195764) Calling .GetSSHUsername
I0520 12:36:46.930799  874496 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/functional-195764/id_rsa Username:docker}
I0520 12:36:47.010117  874496 build_images.go:161] Building image from path: /tmp/build.1634633625.tar
I0520 12:36:47.010192  874496 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 12:36:47.024457  874496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1634633625.tar
I0520 12:36:47.029069  874496 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1634633625.tar: stat -c "%s %y" /var/lib/minikube/build/build.1634633625.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1634633625.tar': No such file or directory
I0520 12:36:47.029105  874496 ssh_runner.go:362] scp /tmp/build.1634633625.tar --> /var/lib/minikube/build/build.1634633625.tar (3072 bytes)
I0520 12:36:47.054736  874496 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1634633625
I0520 12:36:47.065168  874496 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1634633625 -xf /var/lib/minikube/build/build.1634633625.tar
I0520 12:36:47.080138  874496 crio.go:315] Building image: /var/lib/minikube/build/build.1634633625
I0520 12:36:47.080212  874496 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-195764 /var/lib/minikube/build/build.1634633625 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0520 12:36:48.640715  874496 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-195764 /var/lib/minikube/build/build.1634633625 --cgroup-manager=cgroupfs: (1.560472237s)
I0520 12:36:48.640780  874496 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1634633625
I0520 12:36:48.653176  874496 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1634633625.tar
I0520 12:36:48.663723  874496 build_images.go:217] Built localhost/my-image:functional-195764 from /tmp/build.1634633625.tar
I0520 12:36:48.663760  874496 build_images.go:133] succeeded building to: functional-195764
I0520 12:36:48.663766  874496 build_images.go:134] failed building to: 
I0520 12:36:48.663797  874496 main.go:141] libmachine: Making call to close driver server
I0520 12:36:48.663812  874496 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:48.664095  874496 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:48.664117  874496 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 12:36:48.664125  874496 main.go:141] libmachine: Making call to close driver server
I0520 12:36:48.664132  874496 main.go:141] libmachine: (functional-195764) Calling .Close
I0520 12:36:48.664135  874496 main.go:141] libmachine: (functional-195764) DBG | Closing plugin on server side
I0520 12:36:48.664352  874496 main.go:141] libmachine: Successfully made call to close driver server
I0520 12:36:48.664367  874496 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.40s)
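
The stderr above records the node-side build flow: the tarred build context from testdata/build is copied to /var/lib/minikube/build, unpacked, and built with podman under the cgroupfs cgroup manager, while the stdout's three STEP lines show the context's Dockerfile (a gcr.io/k8s-minikube/busybox base, a no-op RUN, and an ADD of content.txt). A minimal sketch of those same node-side steps in Go, assuming it runs directly on the node rather than through the test's SSH runner and using an illustrative directory name:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes one command on the node and aborts on failure, echoing its combined output.
	func run(args ...string) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}

	func main() {
		// Illustrative path; the test derives a /var/lib/minikube/build/build.<nonce> directory from its tarball name.
		const dir = "/var/lib/minikube/build/example"
		run("sudo", "mkdir", "-p", dir)
		run("sudo", "tar", "-C", dir, "-xf", dir+".tar")
		run("sudo", "podman", "build", "-t", "localhost/my-image:functional-195764", dir, "--cgroup-manager=cgroupfs")
	}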

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-195764
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image load --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image load --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr: (4.619842447s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.88s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdspecific-port2541572529/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.045776ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdspecific-port2541572529/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh "sudo umount -f /mount-9p": exit status 1 (261.07949ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-195764 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdspecific-port2541572529/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image load --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image load --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr: (3.436247386s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.72s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074468885/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074468885/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074468885/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T" /mount1: exit status 1 (317.326636ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-195764 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074468885/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074468885/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-195764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074468885/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.132889519s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-195764
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image load --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr
2024/05/20 12:36:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image load --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr: (8.370333324s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image save gcr.io/google-containers/addon-resizer:functional-195764 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image save gcr.io/google-containers/addon-resizer:functional-195764 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.261400948s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image rm gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image rm gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr: (1.046303901s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.44s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-195764 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-195764 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-f88j8" [9d2837c8-d7a0-443d-abf0-05bdc1981c6d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-f88j8" [9d2837c8-d7a0-443d-abf0-05bdc1981c6d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004098405s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-195764
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 image save --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 image save --daemon gcr.io/google-containers/addon-resizer:functional-195764 --alsologtostderr: (3.074547188s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-195764
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 service list: (1.224748114s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-195764 service list -o json: (1.223767838s)
functional_test.go:1490: Took "1.223853657s" to run "out/minikube-linux-amd64 -p functional-195764 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.132:31888
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-195764 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.132:31888
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
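
Taken together, the ServiceCmd subtests above cover a full deploy-and-expose cycle: create a hello-node deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort service on port 8080, and ask minikube for the reachable endpoint (http://192.168.39.132:31888 in this run). A rough Go equivalent follows; the profile name is a placeholder, and kubectl wait stands in for the test's own polling on the app=hello-node pods:

// deploy_and_expose.go - deploy echoserver, expose it as a NodePort, then print the URL.
// Sketch only; "my-profile" is a placeholder and kubectl wait replaces the test's pod polling.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// must runs an external command and aborts with its combined output on failure.
func must(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "my-profile"

	must("kubectl", "--context", profile, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	must("kubectl", "--context", profile, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	must("kubectl", "--context", profile, "wait", "--for=condition=available",
		"deployment/hello-node", "--timeout=120s")

	url := must("out/minikube-linux-amd64", "-p", profile, "service", "hello-node", "--url")
	fmt.Println("endpoint:", strings.TrimSpace(url))
}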

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-195764
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-195764
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-195764
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (204.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-252263 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-252263 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m23.372131638s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.06s)
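
StartCluster brings up a multi-control-plane cluster with the --ha flag and then verifies it with minikube status. A compact sketch of those two steps, assuming a placeholder profile named ha-demo and the locally built binary:

// ha_start.go - start an HA (multi-control-plane) cluster and print its status,
// mirroring the StartCluster step above. "ha-demo" is a placeholder profile name.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-demo",
		"--ha", "--wait=true", "--memory=2200",
		"--driver=kvm2", "--container-runtime=crio", "--alsologtostderr", "-v=7")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// status exits non-zero when any node is unhealthy, so only log the error here.
	status, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-demo",
		"status", "-v=7", "--alsologtostderr").CombinedOutput()
	if err != nil {
		log.Printf("status returned %v", err)
	}
	fmt.Println(string(status))
}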

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-252263 -- rollout status deployment/busybox: (2.161320717s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-vdgxd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xq6j6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xqdrj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-vdgxd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xq6j6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xqdrj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-vdgxd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xq6j6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xqdrj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-vdgxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-vdgxd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xq6j6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xq6j6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xqdrj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-252263 -- exec busybox-fc5497c4f-xqdrj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)
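
PingHostFromPods extracts the host gateway address from inside each busybox pod with the pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 (busybox nslookup prints the resolved address on its fifth output line, and the third space-separated field is the IP), then pings it. A sketch of the same probe for a single pod; the profile and pod names are placeholders:

// ping_host_from_pod.go - resolve host.minikube.internal inside a pod and ping the result,
// using the same shell pipeline the test runs. Profile and pod names are placeholders.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// podExec runs a shell snippet inside the pod via kubectl exec.
func podExec(profile, pod, script string) string {
	out, err := exec.Command("kubectl", "--context", profile, "exec", pod,
		"--", "sh", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("exec in %s failed: %v\n%s", pod, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	const profile, pod = "my-profile", "busybox-0"

	// Busybox nslookup prints the resolved address on line 5; field 3 is the IP.
	hostIP := podExec(profile, pod,
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	fmt.Println("host gateway seen from the pod:", hostIP)

	fmt.Println(podExec(profile, pod, "ping -c 1 "+hostIP))
}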

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (45.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-252263 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-252263 -v=7 --alsologtostderr: (45.089905174s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
E0520 12:41:10.516930  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:10.523022  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:10.533253  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:10.553517  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:10.594361  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:10.675294  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:41:10.835862  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-252263 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0520 12:41:11.156124  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status --output json -v=7 --alsologtostderr
E0520 12:41:11.796349  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp testdata/cp-test.txt ha-252263:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test.txt"
E0520 12:41:13.077365  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263:/home/docker/cp-test.txt ha-252263-m02:/home/docker/cp-test_ha-252263_ha-252263-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test_ha-252263_ha-252263-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263:/home/docker/cp-test.txt ha-252263-m03:/home/docker/cp-test_ha-252263_ha-252263-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test_ha-252263_ha-252263-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263:/home/docker/cp-test.txt ha-252263-m04:/home/docker/cp-test_ha-252263_ha-252263-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test_ha-252263_ha-252263-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp testdata/cp-test.txt ha-252263-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test.txt"
E0520 12:41:15.638429  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m02:/home/docker/cp-test.txt ha-252263:/home/docker/cp-test_ha-252263-m02_ha-252263.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test_ha-252263-m02_ha-252263.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m02:/home/docker/cp-test.txt ha-252263-m03:/home/docker/cp-test_ha-252263-m02_ha-252263-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test_ha-252263-m02_ha-252263-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m02:/home/docker/cp-test.txt ha-252263-m04:/home/docker/cp-test_ha-252263-m02_ha-252263-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test_ha-252263-m02_ha-252263-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp testdata/cp-test.txt ha-252263-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt ha-252263:/home/docker/cp-test_ha-252263-m03_ha-252263.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test_ha-252263-m03_ha-252263.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt ha-252263-m02:/home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test_ha-252263-m03_ha-252263-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m03:/home/docker/cp-test.txt ha-252263-m04:/home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt
E0520 12:41:20.758996  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test_ha-252263-m03_ha-252263-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp testdata/cp-test.txt ha-252263-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile233320252/001/cp-test_ha-252263-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt ha-252263:/home/docker/cp-test_ha-252263-m04_ha-252263.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263 "sudo cat /home/docker/cp-test_ha-252263-m04_ha-252263.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt ha-252263-m02:/home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m02 "sudo cat /home/docker/cp-test_ha-252263-m04_ha-252263-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 cp ha-252263-m04:/home/docker/cp-test.txt ha-252263-m03:/home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 ssh -n ha-252263-m03 "sudo cat /home/docker/cp-test_ha-252263-m04_ha-252263-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.67s)
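
Each CopyFile step pairs minikube cp with an ssh "sudo cat" readback to confirm the file actually landed on the target node. A sketch of that pairing across a profile's nodes; the profile and node names below are placeholders:

// cp_verify.go - copy a local file onto each node of a profile and read it back over ssh,
// the same cp/cat pairing the CopyFile test repeats. Names are placeholders.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// mk invokes the locally built minikube binary and aborts on failure.
func mk(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	profile := "ha-demo"
	nodes := []string{profile, profile + "-m02", profile + "-m03"}

	for _, node := range nodes {
		dest := node + ":/home/docker/cp-test.txt"
		mk("-p", profile, "cp", "testdata/cp-test.txt", dest)
		out := mk("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("%s: %s\n", node, strings.TrimSpace(string(out)))
	}
}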

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.486733437s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-252263 node delete m03 -v=7 --alsologtostderr: (16.453094933s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (345.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-252263 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 12:56:10.517019  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
E0520 12:57:33.562695  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-252263 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m44.248634303s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (345.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (70.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-252263 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-252263 --control-plane -v=7 --alsologtostderr: (1m9.303808324s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-252263 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (56.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-067985 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0520 13:01:10.516020  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-067985 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.941655146s)
--- PASS: TestJSONOutput/start/Command (56.94s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-067985 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-067985 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-067985 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-067985 --output=json --user=testUser: (7.359655202s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-358710 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-358710 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.655736ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b51e5136-42f5-45cc-8da8-f9621016a118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-358710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f3018b2-9850-468e-a9b7-4ce685dd4e8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18932"}}
	{"specversion":"1.0","id":"eb4cb82d-4ab9-4e79-82e3-14dacf5f65a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"806e40f7-ff4f-4541-8730-8ddfc113dd7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig"}}
	{"specversion":"1.0","id":"bea78a41-5177-4cf3-a909-61b8fa7f5205","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube"}}
	{"specversion":"1.0","id":"a41276ac-a682-4970-bc55-d4c33549a04d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7a2159a-fd73-49ae-ada2-2575edd518fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14d6485d-b341-4ee7-9fd6-cc60fcdf4050","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-358710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-358710
--- PASS: TestErrorJSONOutput (0.19s)
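
The stdout block above is a stream of line-delimited CloudEvents, which is what --output=json emits for every step, info message, and error (note the io.k8s.sigs.minikube.error event carrying exitcode 56 and DRV_UNSUPPORTED_OS). A small sketch that decodes such a stream, reading from stdin rather than invoking minikube itself; the field names come straight from the events shown above:

// events.go - decode the line-delimited CloudEvents emitted by minikube --output=json.
// A sketch, not the test's own parser; pipe JSON output into stdin to try it.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event captures only the fields this sketch cares about.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
}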

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (87.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-876350 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-876350 --driver=kvm2  --container-runtime=crio: (40.378293587s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-878474 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-878474 --driver=kvm2  --container-runtime=crio: (44.058588423s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-876350
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-878474
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-878474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-878474
helpers_test.go:175: Cleaning up "first-876350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-876350
--- PASS: TestMinikubeProfile (87.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (23.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-096926 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-096926 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.858447421s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.86s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-096926 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-096926 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (23.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-111864 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-111864 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.688253555s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111864 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111864 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-096926 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111864 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111864 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-111864
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-111864: (1.265702353s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-111864
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-111864: (21.107089306s)
--- PASS: TestMountStart/serial/RestartStopped (22.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111864 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111864 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)
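
The MountStart subtests start a VM with a 9p host mount (--mount plus --mount-port and related flags) and repeatedly confirm the mount survives delete, stop, and restart by grepping the guest's mount table for 9p. A sketch of one start-and-verify pass; the profile name is a placeholder and, unlike the test, the grep is done on the host side for simplicity:

// mount_check.go - start a VM with a 9p host mount and confirm the mount is present,
// following the StartWithMount/VerifyMount steps above. "mount-demo" is a placeholder.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-demo"

	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2048", "--mount", "--mount-port", "46464",
		"--no-kubernetes", "--driver=kvm2", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "mount").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh mount failed: %v\n%s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount present:", line)
			return
		}
	}
	log.Fatal("no 9p mount found")
}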

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (100.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865571 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 13:06:10.516799  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865571 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m40.497715223s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-865571 -- rollout status deployment/busybox: (2.551951206s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-8qcm5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-c8hj2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-8qcm5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-c8hj2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-8qcm5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-c8hj2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.01s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-8qcm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-8qcm5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-c8hj2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865571 -- exec busybox-fc5497c4f-c8hj2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (38.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-865571 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-865571 -v 3 --alsologtostderr: (38.117972195s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (38.67s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-865571 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp testdata/cp-test.txt multinode-865571:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile540683293/001/cp-test_multinode-865571.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571:/home/docker/cp-test.txt multinode-865571-m02:/home/docker/cp-test_multinode-865571_multinode-865571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test_multinode-865571_multinode-865571-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571:/home/docker/cp-test.txt multinode-865571-m03:/home/docker/cp-test_multinode-865571_multinode-865571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m03 "sudo cat /home/docker/cp-test_multinode-865571_multinode-865571-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp testdata/cp-test.txt multinode-865571-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile540683293/001/cp-test_multinode-865571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt multinode-865571:/home/docker/cp-test_multinode-865571-m02_multinode-865571.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test_multinode-865571-m02_multinode-865571.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571-m02:/home/docker/cp-test.txt multinode-865571-m03:/home/docker/cp-test_multinode-865571-m02_multinode-865571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m03 "sudo cat /home/docker/cp-test_multinode-865571-m02_multinode-865571-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp testdata/cp-test.txt multinode-865571-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile540683293/001/cp-test_multinode-865571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt multinode-865571:/home/docker/cp-test_multinode-865571-m03_multinode-865571.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test_multinode-865571-m03_multinode-865571.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571-m03:/home/docker/cp-test.txt multinode-865571-m02:/home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test_multinode-865571-m03_multinode-865571-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)
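Every serial/CopyFile step above follows the same cp-then-verify pattern: copy a file with "minikube cp", then read it back over "minikube ssh" on the target node. A minimal sketch of that pattern, reusing only commands, paths and the multinode-865571 profile from this run (it assumes the cluster is still up):

	# local file -> primary node, then verify over ssh
	out/minikube-linux-amd64 -p multinode-865571 cp testdata/cp-test.txt multinode-865571:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571 "sudo cat /home/docker/cp-test.txt"
	# node -> node (primary -> m02), then verify on the destination node
	out/minikube-linux-amd64 -p multinode-865571 cp multinode-865571:/home/docker/cp-test.txt multinode-865571-m02:/home/docker/cp-test_multinode-865571_multinode-865571-m02.txt
	out/minikube-linux-amd64 -p multinode-865571 ssh -n multinode-865571-m02 "sudo cat /home/docker/cp-test_multinode-865571_multinode-865571-m02.txt"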

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-865571 node stop m03: (1.589455198s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865571 status: exit status 7 (427.749194ms)

                                                
                                                
-- stdout --
	multinode-865571
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-865571-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-865571-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865571 status --alsologtostderr: exit status 7 (414.44739ms)

                                                
                                                
-- stdout --
	multinode-865571
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-865571-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-865571-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 13:07:13.020447  891745 out.go:291] Setting OutFile to fd 1 ...
	I0520 13:07:13.020719  891745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:07:13.020731  891745 out.go:304] Setting ErrFile to fd 2...
	I0520 13:07:13.020737  891745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 13:07:13.020936  891745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18932-852915/.minikube/bin
	I0520 13:07:13.021124  891745 out.go:298] Setting JSON to false
	I0520 13:07:13.021157  891745 mustload.go:65] Loading cluster: multinode-865571
	I0520 13:07:13.021244  891745 notify.go:220] Checking for updates...
	I0520 13:07:13.021634  891745 config.go:182] Loaded profile config "multinode-865571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 13:07:13.021654  891745 status.go:255] checking status of multinode-865571 ...
	I0520 13:07:13.022024  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.022090  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.041637  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39957
	I0520 13:07:13.042034  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.042601  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.042623  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.042973  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.043167  891745 main.go:141] libmachine: (multinode-865571) Calling .GetState
	I0520 13:07:13.044527  891745 status.go:330] multinode-865571 host status = "Running" (err=<nil>)
	I0520 13:07:13.044552  891745 host.go:66] Checking if "multinode-865571" exists ...
	I0520 13:07:13.044827  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.044874  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.059551  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I0520 13:07:13.059901  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.060410  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.060443  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.060733  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.060944  891745 main.go:141] libmachine: (multinode-865571) Calling .GetIP
	I0520 13:07:13.063650  891745 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:07:13.064106  891745 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:07:13.064150  891745 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:07:13.064338  891745 host.go:66] Checking if "multinode-865571" exists ...
	I0520 13:07:13.064693  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.064732  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.079609  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0520 13:07:13.079970  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.080400  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.080420  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.080718  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.080917  891745 main.go:141] libmachine: (multinode-865571) Calling .DriverName
	I0520 13:07:13.081101  891745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:07:13.081131  891745 main.go:141] libmachine: (multinode-865571) Calling .GetSSHHostname
	I0520 13:07:13.083707  891745 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:07:13.084106  891745 main.go:141] libmachine: (multinode-865571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:4f:fd", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:04:53 +0000 UTC Type:0 Mac:52:54:00:a4:4f:fd Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-865571 Clientid:01:52:54:00:a4:4f:fd}
	I0520 13:07:13.084130  891745 main.go:141] libmachine: (multinode-865571) DBG | domain multinode-865571 has defined IP address 192.168.39.78 and MAC address 52:54:00:a4:4f:fd in network mk-multinode-865571
	I0520 13:07:13.084278  891745 main.go:141] libmachine: (multinode-865571) Calling .GetSSHPort
	I0520 13:07:13.084455  891745 main.go:141] libmachine: (multinode-865571) Calling .GetSSHKeyPath
	I0520 13:07:13.084606  891745 main.go:141] libmachine: (multinode-865571) Calling .GetSSHUsername
	I0520 13:07:13.084763  891745 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571/id_rsa Username:docker}
	I0520 13:07:13.170411  891745 ssh_runner.go:195] Run: systemctl --version
	I0520 13:07:13.176495  891745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:07:13.191028  891745 kubeconfig.go:125] found "multinode-865571" server: "https://192.168.39.78:8443"
	I0520 13:07:13.191063  891745 api_server.go:166] Checking apiserver status ...
	I0520 13:07:13.191092  891745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 13:07:13.205183  891745 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0520 13:07:13.214487  891745 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 13:07:13.214546  891745 ssh_runner.go:195] Run: ls
	I0520 13:07:13.218669  891745 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I0520 13:07:13.223252  891745 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I0520 13:07:13.223273  891745 status.go:422] multinode-865571 apiserver status = Running (err=<nil>)
	I0520 13:07:13.223289  891745 status.go:257] multinode-865571 status: &{Name:multinode-865571 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:07:13.223309  891745 status.go:255] checking status of multinode-865571-m02 ...
	I0520 13:07:13.223601  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.223633  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.239174  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0520 13:07:13.239566  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.240054  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.240075  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.240413  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.240637  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .GetState
	I0520 13:07:13.241961  891745 status.go:330] multinode-865571-m02 host status = "Running" (err=<nil>)
	I0520 13:07:13.241981  891745 host.go:66] Checking if "multinode-865571-m02" exists ...
	I0520 13:07:13.242271  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.242329  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.256719  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I0520 13:07:13.257322  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.257883  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.257907  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.258371  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.258543  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .GetIP
	I0520 13:07:13.261074  891745 main.go:141] libmachine: (multinode-865571-m02) DBG | domain multinode-865571-m02 has defined MAC address 52:54:00:9f:3b:0c in network mk-multinode-865571
	I0520 13:07:13.261463  891745 main.go:141] libmachine: (multinode-865571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:3b:0c", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:05:54 +0000 UTC Type:0 Mac:52:54:00:9f:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-865571-m02 Clientid:01:52:54:00:9f:3b:0c}
	I0520 13:07:13.261488  891745 main.go:141] libmachine: (multinode-865571-m02) DBG | domain multinode-865571-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:9f:3b:0c in network mk-multinode-865571
	I0520 13:07:13.261631  891745 host.go:66] Checking if "multinode-865571-m02" exists ...
	I0520 13:07:13.261952  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.261989  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.276722  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0520 13:07:13.277116  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.277578  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.277599  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.277960  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.278182  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .DriverName
	I0520 13:07:13.278377  891745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 13:07:13.278398  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .GetSSHHostname
	I0520 13:07:13.280977  891745 main.go:141] libmachine: (multinode-865571-m02) DBG | domain multinode-865571-m02 has defined MAC address 52:54:00:9f:3b:0c in network mk-multinode-865571
	I0520 13:07:13.281341  891745 main.go:141] libmachine: (multinode-865571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:3b:0c", ip: ""} in network mk-multinode-865571: {Iface:virbr1 ExpiryTime:2024-05-20 14:05:54 +0000 UTC Type:0 Mac:52:54:00:9f:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-865571-m02 Clientid:01:52:54:00:9f:3b:0c}
	I0520 13:07:13.281361  891745 main.go:141] libmachine: (multinode-865571-m02) DBG | domain multinode-865571-m02 has defined IP address 192.168.39.84 and MAC address 52:54:00:9f:3b:0c in network mk-multinode-865571
	I0520 13:07:13.281510  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .GetSSHPort
	I0520 13:07:13.281654  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .GetSSHKeyPath
	I0520 13:07:13.281777  891745 main.go:141] libmachine: (multinode-865571-m02) Calling .GetSSHUsername
	I0520 13:07:13.281919  891745 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18932-852915/.minikube/machines/multinode-865571-m02/id_rsa Username:docker}
	I0520 13:07:13.358662  891745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 13:07:13.374985  891745 status.go:257] multinode-865571-m02 status: &{Name:multinode-865571-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 13:07:13.375021  891745 status.go:255] checking status of multinode-865571-m03 ...
	I0520 13:07:13.375331  891745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 13:07:13.375369  891745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 13:07:13.390611  891745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0520 13:07:13.391048  891745 main.go:141] libmachine: () Calling .GetVersion
	I0520 13:07:13.391526  891745 main.go:141] libmachine: Using API Version  1
	I0520 13:07:13.391550  891745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 13:07:13.391846  891745 main.go:141] libmachine: () Calling .GetMachineName
	I0520 13:07:13.392027  891745 main.go:141] libmachine: (multinode-865571-m03) Calling .GetState
	I0520 13:07:13.393473  891745 status.go:330] multinode-865571-m03 host status = "Stopped" (err=<nil>)
	I0520 13:07:13.393489  891745 status.go:343] host is not running, skipping remaining checks
	I0520 13:07:13.393497  891745 status.go:257] multinode-865571-m03 status: &{Name:multinode-865571-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
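As the output above shows, once m03 is stopped "minikube status" reports that node as Stopped and the command exits non-zero (exit status 7 in this run) rather than 0; the two "Non-zero exit" lines are the assertion, not a failure. A minimal sketch of the same check, assuming a POSIX shell and the profile from this run:

	out/minikube-linux-amd64 -p multinode-865571 node stop m03
	out/minikube-linux-amd64 -p multinode-865571 status
	echo $?   # 7 in this run, because one node is stopped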

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (25.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-865571 node start m03 -v=7 --alsologtostderr: (25.378207392s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.99s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-865571 node delete m03: (1.707398442s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (175s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865571 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 13:16:10.516176  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865571 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m54.453520567s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865571 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (175.00s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-865571
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865571-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-865571-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.951712ms)

                                                
                                                
-- stdout --
	* [multinode-865571-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-865571-m02' is duplicated with machine name 'multinode-865571-m02' in profile 'multinode-865571'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865571-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865571-m03 --driver=kvm2  --container-runtime=crio: (42.371022802s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-865571
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-865571: exit status 80 (220.669708ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-865571 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-865571-m03 already exists in multinode-865571-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-865571-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.46s)

                                                
                                    
x
+
TestScheduledStopUnix (109.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-171322 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-171322 --memory=2048 --driver=kvm2  --container-runtime=crio: (37.986051611s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171322 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-171322 -n scheduled-stop-171322
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171322 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171322 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171322 -n scheduled-stop-171322
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-171322
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-171322 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-171322
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-171322: exit status 7 (62.885759ms)

                                                
                                                
-- stdout --
	scheduled-stop-171322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171322 -n scheduled-stop-171322
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171322 -n scheduled-stop-171322: exit status 7 (65.167852ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-171322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-171322
--- PASS: TestScheduledStopUnix (109.55s)
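The scheduled-stop flow exercised above is: schedule a stop a few minutes out, cancel it, re-arm it with a short delay, then poll status until the host reports Stopped (at which point status itself exits 7). The commands, exactly as run against the scheduled-stop-171322 profile from this run:

	out/minikube-linux-amd64 stop -p scheduled-stop-171322 --schedule 5m        # arm a stop 5 minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-171322 --cancel-scheduled   # cancel the pending stop
	out/minikube-linux-amd64 stop -p scheduled-stop-171322 --schedule 15s       # re-arm with a short delay
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-171322 -n scheduled-stop-171322   # "Stopped" once the stop has fired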

                                                
                                    
x
+
TestRunningBinaryUpgrade (218.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1092516504 start -p running-upgrade-823294 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1092516504 start -p running-upgrade-823294 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.819066293s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-823294 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-823294 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.373186967s)
helpers_test.go:175: Cleaning up "running-upgrade-823294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-823294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-823294: (1.186368938s)
--- PASS: TestRunningBinaryUpgrade (218.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.150494ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-782572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18932-852915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18932-852915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
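The MK_USAGE error above is the expected result here: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error text itself suggests clearing any global default first. A minimal sketch of the working invocation, taken from that error message and the next subtest in this run:

	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --driver=kvm2 --container-runtime=crio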

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (90.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782572 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782572 --driver=kvm2  --container-runtime=crio: (1m29.800719401s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-782572 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (13.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --driver=kvm2  --container-runtime=crio: (12.021663683s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-782572 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-782572 status -o json: exit status 2 (300.457748ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-782572","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-782572
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-782572: (1.081075879s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (48.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782572 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.97781724s)
--- PASS: TestNoKubernetes/serial/Start (48.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-782572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-782572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.79615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
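The "exit status 1" above is the pass condition: the test sshes into the VM and runs "systemctl is-active --quiet service kubelet", which exits non-zero while the unit is not active (the inner "Process exited with status 3" is systemctl reporting an inactive unit), so a stopped kubelet deliberately looks like a failed command. To repeat the check by hand with the commands from this run:

	out/minikube-linux-amd64 ssh -p NoKubernetes-782572 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero while kubelet is stopped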

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-782572
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-782572: (1.425589396s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782572 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782572 --driver=kvm2  --container-runtime=crio: (21.609407999s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (136.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3377939101 start -p stopped-upgrade-456265 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0520 13:26:10.516999  860334 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18932-852915/.minikube/profiles/functional-195764/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3377939101 start -p stopped-upgrade-456265 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m10.579900027s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3377939101 -p stopped-upgrade-456265 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3377939101 -p stopped-upgrade-456265 stop: (2.142349428s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-456265 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-456265 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.655450562s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (136.38s)
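The upgrade path exercised above is: provision a cluster with the older release binary, stop it, then start the same profile again with the freshly built binary. The three commands exactly as run in this test (the /tmp path is the v1.26.0 release binary used by this run):

	/tmp/minikube-v1.26.0.3377939101 start -p stopped-upgrade-456265 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.3377939101 -p stopped-upgrade-456265 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-456265 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio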

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-782572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-782572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.699754ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-456265
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                    
x
+
TestPause/serial/Start (86.1s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-587544 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-587544 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m26.096694819s)
--- PASS: TestPause/serial/Start (86.10s)

                                                
                                    

Test skip (32/207)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    